00:00:00.001 Started by upstream project "autotest-per-patch" build number 126253 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.037 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.038 The recommended git tool is: git 00:00:00.038 using credential 00000000-0000-0000-0000-000000000002 00:00:00.041 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.061 Fetching changes from the remote Git repository 00:00:00.064 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.095 Using shallow fetch with depth 1 00:00:00.095 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.095 > git --version # timeout=10 00:00:00.144 > git --version # 'git version 2.39.2' 00:00:00.145 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.183 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.183 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.403 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.417 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.430 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:02.431 > git config core.sparsecheckout # timeout=10 00:00:02.441 > git read-tree -mu HEAD # timeout=10 00:00:02.459 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:02.481 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:02.481 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:02.581 [Pipeline] Start of Pipeline 00:00:02.600 [Pipeline] library 00:00:02.603 Loading library shm_lib@master 00:00:02.603 Library shm_lib@master is cached. Copying from home. 00:00:02.626 [Pipeline] node 00:00:02.635 Running on GP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:02.638 [Pipeline] { 00:00:02.652 [Pipeline] catchError 00:00:02.654 [Pipeline] { 00:00:02.671 [Pipeline] wrap 00:00:02.684 [Pipeline] { 00:00:02.695 [Pipeline] stage 00:00:02.697 [Pipeline] { (Prologue) 00:00:02.886 [Pipeline] sh 00:00:03.165 + logger -p user.info -t JENKINS-CI 00:00:03.185 [Pipeline] echo 00:00:03.187 Node: GP6 00:00:03.194 [Pipeline] sh 00:00:03.492 [Pipeline] setCustomBuildProperty 00:00:03.508 [Pipeline] echo 00:00:03.510 Cleanup processes 00:00:03.520 [Pipeline] sh 00:00:03.812 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:03.812 3573170 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:03.827 [Pipeline] sh 00:00:04.111 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.111 ++ grep -v 'sudo pgrep' 00:00:04.111 ++ awk '{print $1}' 00:00:04.111 + sudo kill -9 00:00:04.111 + true 00:00:04.130 [Pipeline] cleanWs 00:00:04.141 [WS-CLEANUP] Deleting project workspace... 00:00:04.141 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.150 [WS-CLEANUP] done 00:00:04.153 [Pipeline] setCustomBuildProperty 00:00:04.170 [Pipeline] sh 00:00:04.451 + sudo git config --global --replace-all safe.directory '*' 00:00:04.535 [Pipeline] httpRequest 00:00:04.557 [Pipeline] echo 00:00:04.559 Sorcerer 10.211.164.101 is alive 00:00:04.567 [Pipeline] httpRequest 00:00:04.571 HttpMethod: GET 00:00:04.572 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:04.572 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:04.575 Response Code: HTTP/1.1 200 OK 00:00:04.576 Success: Status code 200 is in the accepted range: 200,404 00:00:04.577 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:05.355 [Pipeline] sh 00:00:05.642 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:05.660 [Pipeline] httpRequest 00:00:05.702 [Pipeline] echo 00:00:05.703 Sorcerer 10.211.164.101 is alive 00:00:05.713 [Pipeline] httpRequest 00:00:05.718 HttpMethod: GET 00:00:05.719 URL: http://10.211.164.101/packages/spdk_1053f1b138c7e205b9eb35d47b91a730f8ce53aa.tar.gz 00:00:05.719 Sending request to url: http://10.211.164.101/packages/spdk_1053f1b138c7e205b9eb35d47b91a730f8ce53aa.tar.gz 00:00:05.737 Response Code: HTTP/1.1 200 OK 00:00:05.738 Success: Status code 200 is in the accepted range: 200,404 00:00:05.738 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_1053f1b138c7e205b9eb35d47b91a730f8ce53aa.tar.gz 00:01:14.648 [Pipeline] sh 00:01:14.953 + tar --no-same-owner -xf spdk_1053f1b138c7e205b9eb35d47b91a730f8ce53aa.tar.gz 00:01:18.267 [Pipeline] sh 00:01:18.545 + git -C spdk log --oneline -n5 00:01:18.545 1053f1b13 util: don't allow users to pass caddr/cport for listen sockets 00:01:18.545 0663932f5 util: add spdk_net_getaddr 00:01:18.545 9da437b46 util: move module/sock/sock_kernel.h contents to net.c 00:01:18.545 35c6d81e6 util: add spdk_net_get_interface_name 00:01:18.545 f8598a71f bdev/uring: use util functions in bdev_uring_check_zoned_support 00:01:18.557 [Pipeline] } 00:01:18.574 [Pipeline] // stage 00:01:18.582 [Pipeline] stage 00:01:18.583 [Pipeline] { (Prepare) 00:01:18.596 [Pipeline] writeFile 00:01:18.607 [Pipeline] sh 00:01:18.888 + logger -p user.info -t JENKINS-CI 00:01:18.899 [Pipeline] sh 00:01:19.183 + logger -p user.info -t JENKINS-CI 00:01:19.195 [Pipeline] sh 00:01:19.477 + cat autorun-spdk.conf 00:01:19.477 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.477 SPDK_TEST_NVMF=1 00:01:19.477 SPDK_TEST_NVME_CLI=1 00:01:19.477 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:19.477 SPDK_TEST_NVMF_NICS=e810 00:01:19.477 SPDK_TEST_VFIOUSER=1 00:01:19.477 SPDK_RUN_UBSAN=1 00:01:19.478 NET_TYPE=phy 00:01:19.486 RUN_NIGHTLY=0 00:01:19.491 [Pipeline] readFile 00:01:19.519 [Pipeline] withEnv 00:01:19.521 [Pipeline] { 00:01:19.535 [Pipeline] sh 00:01:19.819 + set -ex 00:01:19.819 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:19.819 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:19.819 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.819 ++ SPDK_TEST_NVMF=1 00:01:19.819 ++ SPDK_TEST_NVME_CLI=1 00:01:19.819 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:19.819 ++ SPDK_TEST_NVMF_NICS=e810 00:01:19.819 ++ SPDK_TEST_VFIOUSER=1 00:01:19.819 ++ SPDK_RUN_UBSAN=1 00:01:19.819 ++ NET_TYPE=phy 00:01:19.819 ++ RUN_NIGHTLY=0 00:01:19.819 + case $SPDK_TEST_NVMF_NICS in 00:01:19.819 + DRIVERS=ice 00:01:19.819 + [[ 
tcp == \r\d\m\a ]] 00:01:19.819 + [[ -n ice ]] 00:01:19.819 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:19.819 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:19.819 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:19.819 rmmod: ERROR: Module irdma is not currently loaded 00:01:19.819 rmmod: ERROR: Module i40iw is not currently loaded 00:01:19.819 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:19.819 + true 00:01:19.819 + for D in $DRIVERS 00:01:19.819 + sudo modprobe ice 00:01:19.819 + exit 0 00:01:19.830 [Pipeline] } 00:01:19.848 [Pipeline] // withEnv 00:01:19.853 [Pipeline] } 00:01:19.865 [Pipeline] // stage 00:01:19.875 [Pipeline] catchError 00:01:19.877 [Pipeline] { 00:01:19.886 [Pipeline] timeout 00:01:19.886 Timeout set to expire in 50 min 00:01:19.888 [Pipeline] { 00:01:19.899 [Pipeline] stage 00:01:19.901 [Pipeline] { (Tests) 00:01:19.912 [Pipeline] sh 00:01:20.193 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:20.193 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:20.193 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:20.193 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:20.193 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:20.193 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:20.193 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:20.193 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:20.193 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:20.193 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:20.194 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:20.194 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:20.194 + source /etc/os-release 00:01:20.194 ++ NAME='Fedora Linux' 00:01:20.194 ++ VERSION='38 (Cloud Edition)' 00:01:20.194 ++ ID=fedora 00:01:20.194 ++ VERSION_ID=38 00:01:20.194 ++ VERSION_CODENAME= 00:01:20.194 ++ PLATFORM_ID=platform:f38 00:01:20.194 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:20.194 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:20.194 ++ LOGO=fedora-logo-icon 00:01:20.194 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:20.194 ++ HOME_URL=https://fedoraproject.org/ 00:01:20.194 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:20.194 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:20.194 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:20.194 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:20.194 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:20.194 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:20.194 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:20.194 ++ SUPPORT_END=2024-05-14 00:01:20.194 ++ VARIANT='Cloud Edition' 00:01:20.194 ++ VARIANT_ID=cloud 00:01:20.194 + uname -a 00:01:20.194 Linux spdk-gp-06 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:20.194 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:21.131 Hugepages 00:01:21.131 node hugesize free / total 00:01:21.131 node0 1048576kB 0 / 0 00:01:21.131 node0 2048kB 0 / 0 00:01:21.131 node1 1048576kB 0 / 0 00:01:21.131 node1 2048kB 0 / 0 00:01:21.132 00:01:21.132 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:21.132 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:21.132 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:21.132 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:21.132 
I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:21.132 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:21.132 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:21.132 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:21.132 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:21.132 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:21.132 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:21.132 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:21.132 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:21.132 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:21.132 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:21.390 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:21.390 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:21.390 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:21.390 + rm -f /tmp/spdk-ld-path 00:01:21.390 + source autorun-spdk.conf 00:01:21.390 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:21.390 ++ SPDK_TEST_NVMF=1 00:01:21.390 ++ SPDK_TEST_NVME_CLI=1 00:01:21.390 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:21.390 ++ SPDK_TEST_NVMF_NICS=e810 00:01:21.390 ++ SPDK_TEST_VFIOUSER=1 00:01:21.390 ++ SPDK_RUN_UBSAN=1 00:01:21.390 ++ NET_TYPE=phy 00:01:21.390 ++ RUN_NIGHTLY=0 00:01:21.390 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:21.390 + [[ -n '' ]] 00:01:21.390 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:21.390 + for M in /var/spdk/build-*-manifest.txt 00:01:21.390 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:21.390 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:21.390 + for M in /var/spdk/build-*-manifest.txt 00:01:21.390 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:21.390 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:21.390 ++ uname 00:01:21.390 + [[ Linux == \L\i\n\u\x ]] 00:01:21.390 + sudo dmesg -T 00:01:21.390 + sudo dmesg --clear 00:01:21.390 + dmesg_pid=3574465 00:01:21.390 + [[ Fedora Linux == FreeBSD ]] 00:01:21.390 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:21.390 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:21.390 + sudo dmesg -Tw 00:01:21.390 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:21.390 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:21.390 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:21.390 + [[ -x /usr/src/fio-static/fio ]] 00:01:21.390 + export FIO_BIN=/usr/src/fio-static/fio 00:01:21.390 + FIO_BIN=/usr/src/fio-static/fio 00:01:21.390 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:21.390 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:21.390 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:21.390 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:21.390 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:21.390 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:21.390 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:21.390 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:21.390 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:21.390 Test configuration: 00:01:21.390 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:21.390 SPDK_TEST_NVMF=1 00:01:21.390 SPDK_TEST_NVME_CLI=1 00:01:21.390 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:21.390 SPDK_TEST_NVMF_NICS=e810 00:01:21.390 SPDK_TEST_VFIOUSER=1 00:01:21.390 SPDK_RUN_UBSAN=1 00:01:21.390 NET_TYPE=phy 00:01:21.390 RUN_NIGHTLY=0 23:27:56 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:21.390 23:27:56 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:21.390 23:27:56 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:21.390 23:27:56 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:21.390 23:27:56 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:21.390 23:27:56 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:21.390 23:27:56 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:21.390 23:27:56 -- paths/export.sh@5 -- $ export PATH 00:01:21.390 23:27:56 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:21.390 23:27:56 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:21.390 23:27:56 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:21.390 23:27:56 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721078876.XXXXXX 00:01:21.390 23:27:56 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721078876.nI7W4z 00:01:21.390 23:27:56 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:21.390 23:27:56 -- 
common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:21.390 23:27:56 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:21.390 23:27:56 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:21.390 23:27:56 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:21.390 23:27:56 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:21.390 23:27:56 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:21.390 23:27:56 -- common/autotest_common.sh@10 -- $ set +x 00:01:21.391 23:27:56 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:21.391 23:27:56 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:21.391 23:27:56 -- pm/common@17 -- $ local monitor 00:01:21.391 23:27:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:21.391 23:27:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:21.391 23:27:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:21.391 23:27:56 -- pm/common@21 -- $ date +%s 00:01:21.391 23:27:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:21.391 23:27:56 -- pm/common@21 -- $ date +%s 00:01:21.391 23:27:56 -- pm/common@25 -- $ sleep 1 00:01:21.391 23:27:56 -- pm/common@21 -- $ date +%s 00:01:21.391 23:27:56 -- pm/common@21 -- $ date +%s 00:01:21.391 23:27:56 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721078876 00:01:21.391 23:27:56 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721078876 00:01:21.391 23:27:56 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721078876 00:01:21.391 23:27:56 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721078876 00:01:21.391 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721078876_collect-vmstat.pm.log 00:01:21.391 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721078876_collect-cpu-load.pm.log 00:01:21.391 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721078876_collect-cpu-temp.pm.log 00:01:21.391 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721078876_collect-bmc-pm.bmc.pm.log 00:01:22.327 23:27:57 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:22.327 23:27:57 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:22.327 23:27:57 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:22.327 23:27:57 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:22.327 23:27:57 -- spdk/autobuild.sh@16 -- $ date -u 00:01:22.327 Mon Jul 15 09:27:57 PM UTC 2024 00:01:22.327 23:27:57 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:22.327 v24.09-pre-218-g1053f1b13 00:01:22.327 23:27:57 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:22.327 23:27:57 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:22.327 23:27:57 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:22.327 23:27:57 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:22.327 23:27:57 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:22.327 23:27:57 -- common/autotest_common.sh@10 -- $ set +x 00:01:22.586 ************************************ 00:01:22.586 START TEST ubsan 00:01:22.586 ************************************ 00:01:22.586 23:27:57 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:22.586 using ubsan 00:01:22.586 00:01:22.586 real 0m0.000s 00:01:22.586 user 0m0.000s 00:01:22.586 sys 0m0.000s 00:01:22.586 23:27:57 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:22.586 23:27:57 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:22.586 ************************************ 00:01:22.586 END TEST ubsan 00:01:22.586 ************************************ 00:01:22.586 23:27:57 -- common/autotest_common.sh@1142 -- $ return 0 00:01:22.586 23:27:57 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:22.586 23:27:57 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:22.586 23:27:57 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:22.586 23:27:57 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:22.586 23:27:57 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:22.586 23:27:57 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:22.586 23:27:57 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:22.586 23:27:57 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:22.586 23:27:57 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:22.586 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:22.586 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:22.845 Using 'verbs' RDMA provider 00:01:33.398 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:43.375 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:43.375 Creating mk/config.mk...done. 00:01:43.375 Creating mk/cc.flags.mk...done. 00:01:43.375 Type 'make' to build. 00:01:43.375 23:28:17 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:01:43.375 23:28:17 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:43.375 23:28:17 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:43.375 23:28:17 -- common/autotest_common.sh@10 -- $ set +x 00:01:43.375 ************************************ 00:01:43.375 START TEST make 00:01:43.375 ************************************ 00:01:43.375 23:28:18 make -- common/autotest_common.sh@1123 -- $ make -j48 00:01:43.375 make[1]: Nothing to be done for 'all'. 
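The configure invocation and "make -j48" recorded above can be replayed outside of Jenkins. A minimal sketch, assuming the workspace path, the fio sources under /usr/src/fio, and the -j48 job count exactly as they appear in this log (all of these would differ on another machine):

    # Reproduce the build configuration that autobuild.sh assembled above.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
    make -j48   # job count taken from the run_test invocation in this log

The flag list is the config_params string captured earlier in the log plus the --with-shared that autobuild.sh appends; --enable-ubsan is what the "using ubsan" test above asserts, and --with-vfio-user is why a Meson build of libvfio-user follows next.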
00:01:44.760 The Meson build system 00:01:44.760 Version: 1.3.1 00:01:44.760 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:44.760 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:44.760 Build type: native build 00:01:44.760 Project name: libvfio-user 00:01:44.760 Project version: 0.0.1 00:01:44.760 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:44.760 C linker for the host machine: cc ld.bfd 2.39-16 00:01:44.760 Host machine cpu family: x86_64 00:01:44.760 Host machine cpu: x86_64 00:01:44.760 Run-time dependency threads found: YES 00:01:44.760 Library dl found: YES 00:01:44.760 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:44.760 Run-time dependency json-c found: YES 0.17 00:01:44.760 Run-time dependency cmocka found: YES 1.1.7 00:01:44.760 Program pytest-3 found: NO 00:01:44.760 Program flake8 found: NO 00:01:44.760 Program misspell-fixer found: NO 00:01:44.760 Program restructuredtext-lint found: NO 00:01:44.760 Program valgrind found: YES (/usr/bin/valgrind) 00:01:44.760 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:44.760 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:44.760 Compiler for C supports arguments -Wwrite-strings: YES 00:01:44.760 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:44.760 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:44.760 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:44.760 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:44.760 Build targets in project: 8 00:01:44.760 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:44.760 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:44.760 00:01:44.760 libvfio-user 0.0.1 00:01:44.760 00:01:44.760 User defined options 00:01:44.760 buildtype : debug 00:01:44.760 default_library: shared 00:01:44.760 libdir : /usr/local/lib 00:01:44.760 00:01:44.760 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:45.705 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:45.705 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:45.969 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:45.969 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:45.969 [4/37] Compiling C object samples/null.p/null.c.o 00:01:45.969 [5/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:45.969 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:45.969 [7/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:45.969 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:45.969 [9/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:45.969 [10/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:45.969 [11/37] Compiling C object samples/server.p/server.c.o 00:01:45.969 [12/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:45.969 [13/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:45.969 [14/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:45.969 [15/37] Compiling C object samples/client.p/client.c.o 00:01:45.969 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:45.969 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:45.969 [18/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:45.969 [19/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:45.969 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:45.969 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:45.969 [22/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:45.969 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:45.969 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:45.970 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:45.970 [26/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:45.970 [27/37] Linking target samples/client 00:01:45.970 [28/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:46.231 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:01:46.231 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:46.231 [31/37] Linking target test/unit_tests 00:01:46.231 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:46.493 [33/37] Linking target samples/server 00:01:46.493 [34/37] Linking target samples/null 00:01:46.493 [35/37] Linking target samples/shadow_ioeventfd_server 00:01:46.493 [36/37] Linking target samples/lspci 00:01:46.493 [37/37] Linking target samples/gpio-pci-idio-16 00:01:46.493 INFO: autodetecting backend as ninja 00:01:46.493 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:46.493 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:47.126 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:47.126 ninja: no work to do. 00:01:52.405 The Meson build system 00:01:52.405 Version: 1.3.1 00:01:52.405 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:52.405 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:52.405 Build type: native build 00:01:52.405 Program cat found: YES (/usr/bin/cat) 00:01:52.405 Project name: DPDK 00:01:52.405 Project version: 24.03.0 00:01:52.405 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:52.405 C linker for the host machine: cc ld.bfd 2.39-16 00:01:52.405 Host machine cpu family: x86_64 00:01:52.405 Host machine cpu: x86_64 00:01:52.405 Message: ## Building in Developer Mode ## 00:01:52.405 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:52.405 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:52.405 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:52.405 Program python3 found: YES (/usr/bin/python3) 00:01:52.405 Program cat found: YES (/usr/bin/cat) 00:01:52.405 Compiler for C supports arguments -march=native: YES 00:01:52.405 Checking for size of "void *" : 8 00:01:52.405 Checking for size of "void *" : 8 (cached) 00:01:52.405 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:52.405 Library m found: YES 00:01:52.405 Library numa found: YES 00:01:52.405 Has header "numaif.h" : YES 00:01:52.405 Library fdt found: NO 00:01:52.405 Library execinfo found: NO 00:01:52.405 Has header "execinfo.h" : YES 00:01:52.405 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:52.405 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:52.405 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:52.405 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:52.405 Run-time dependency openssl found: YES 3.0.9 00:01:52.405 Run-time dependency libpcap found: YES 1.10.4 00:01:52.405 Has header "pcap.h" with dependency libpcap: YES 00:01:52.405 Compiler for C supports arguments -Wcast-qual: YES 00:01:52.405 Compiler for C supports arguments -Wdeprecated: YES 00:01:52.405 Compiler for C supports arguments -Wformat: YES 00:01:52.405 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:52.405 Compiler for C supports arguments -Wformat-security: NO 00:01:52.405 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:52.405 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:52.405 Compiler for C supports arguments -Wnested-externs: YES 00:01:52.405 Compiler for C supports arguments -Wold-style-definition: YES 00:01:52.405 Compiler for C supports arguments -Wpointer-arith: YES 00:01:52.405 Compiler for C supports arguments -Wsign-compare: YES 00:01:52.405 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:52.405 Compiler for C supports arguments -Wundef: YES 00:01:52.405 Compiler for C supports arguments -Wwrite-strings: YES 00:01:52.405 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:52.405 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:52.405 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:52.406 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:52.406 Program objdump found: YES (/usr/bin/objdump) 00:01:52.406 Compiler for C supports arguments -mavx512f: YES 00:01:52.406 Checking if "AVX512 checking" compiles: YES 00:01:52.406 Fetching value of define "__SSE4_2__" : 1 00:01:52.406 Fetching value of define "__AES__" : 1 00:01:52.406 Fetching value of define "__AVX__" : 1 00:01:52.406 Fetching value of define "__AVX2__" : (undefined) 00:01:52.406 Fetching value of define "__AVX512BW__" : (undefined) 00:01:52.406 Fetching value of define "__AVX512CD__" : (undefined) 00:01:52.406 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:52.406 Fetching value of define "__AVX512F__" : (undefined) 00:01:52.406 Fetching value of define "__AVX512VL__" : (undefined) 00:01:52.406 Fetching value of define "__PCLMUL__" : 1 00:01:52.406 Fetching value of define "__RDRND__" : 1 00:01:52.406 Fetching value of define "__RDSEED__" : (undefined) 00:01:52.406 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:52.406 Fetching value of define "__znver1__" : (undefined) 00:01:52.406 Fetching value of define "__znver2__" : (undefined) 00:01:52.406 Fetching value of define "__znver3__" : (undefined) 00:01:52.406 Fetching value of define "__znver4__" : (undefined) 00:01:52.406 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:52.406 Message: lib/log: Defining dependency "log" 00:01:52.406 Message: lib/kvargs: Defining dependency "kvargs" 00:01:52.406 Message: lib/telemetry: Defining dependency "telemetry" 00:01:52.406 Checking for function "getentropy" : NO 00:01:52.406 Message: lib/eal: Defining dependency "eal" 00:01:52.406 Message: lib/ring: Defining dependency "ring" 00:01:52.406 Message: lib/rcu: Defining dependency "rcu" 00:01:52.406 Message: lib/mempool: Defining dependency "mempool" 00:01:52.406 Message: lib/mbuf: Defining dependency "mbuf" 00:01:52.406 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:52.406 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:52.406 Compiler for C supports arguments -mpclmul: YES 00:01:52.406 Compiler for C supports arguments -maes: YES 00:01:52.406 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:52.406 Compiler for C supports arguments -mavx512bw: YES 00:01:52.406 Compiler for C supports arguments -mavx512dq: YES 00:01:52.406 Compiler for C supports arguments -mavx512vl: YES 00:01:52.406 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:52.406 Compiler for C supports arguments -mavx2: YES 00:01:52.406 Compiler for C supports arguments -mavx: YES 00:01:52.406 Message: lib/net: Defining dependency "net" 00:01:52.406 Message: lib/meter: Defining dependency "meter" 00:01:52.406 Message: lib/ethdev: Defining dependency "ethdev" 00:01:52.406 Message: lib/pci: Defining dependency "pci" 00:01:52.406 Message: lib/cmdline: Defining dependency "cmdline" 00:01:52.406 Message: lib/hash: Defining dependency "hash" 00:01:52.406 Message: lib/timer: Defining dependency "timer" 00:01:52.406 Message: lib/compressdev: Defining dependency "compressdev" 00:01:52.406 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:52.406 Message: lib/dmadev: Defining dependency "dmadev" 00:01:52.406 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:52.406 Message: lib/power: Defining dependency "power" 00:01:52.406 Message: lib/reorder: Defining dependency "reorder" 00:01:52.406 
Message: lib/security: Defining dependency "security" 00:01:52.406 Has header "linux/userfaultfd.h" : YES 00:01:52.406 Has header "linux/vduse.h" : YES 00:01:52.406 Message: lib/vhost: Defining dependency "vhost" 00:01:52.406 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:52.406 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:52.406 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:52.406 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:52.406 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:52.406 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:52.406 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:52.406 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:52.406 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:52.406 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:52.406 Program doxygen found: YES (/usr/bin/doxygen) 00:01:52.406 Configuring doxy-api-html.conf using configuration 00:01:52.406 Configuring doxy-api-man.conf using configuration 00:01:52.406 Program mandb found: YES (/usr/bin/mandb) 00:01:52.406 Program sphinx-build found: NO 00:01:52.406 Configuring rte_build_config.h using configuration 00:01:52.406 Message: 00:01:52.406 ================= 00:01:52.406 Applications Enabled 00:01:52.406 ================= 00:01:52.406 00:01:52.406 apps: 00:01:52.406 00:01:52.406 00:01:52.406 Message: 00:01:52.406 ================= 00:01:52.406 Libraries Enabled 00:01:52.406 ================= 00:01:52.406 00:01:52.406 libs: 00:01:52.406 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:52.406 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:52.406 cryptodev, dmadev, power, reorder, security, vhost, 00:01:52.406 00:01:52.406 Message: 00:01:52.406 =============== 00:01:52.406 Drivers Enabled 00:01:52.406 =============== 00:01:52.406 00:01:52.406 common: 00:01:52.406 00:01:52.406 bus: 00:01:52.406 pci, vdev, 00:01:52.406 mempool: 00:01:52.406 ring, 00:01:52.406 dma: 00:01:52.406 00:01:52.406 net: 00:01:52.406 00:01:52.406 crypto: 00:01:52.406 00:01:52.406 compress: 00:01:52.406 00:01:52.406 vdpa: 00:01:52.406 00:01:52.406 00:01:52.406 Message: 00:01:52.406 ================= 00:01:52.406 Content Skipped 00:01:52.406 ================= 00:01:52.406 00:01:52.406 apps: 00:01:52.406 dumpcap: explicitly disabled via build config 00:01:52.406 graph: explicitly disabled via build config 00:01:52.406 pdump: explicitly disabled via build config 00:01:52.406 proc-info: explicitly disabled via build config 00:01:52.406 test-acl: explicitly disabled via build config 00:01:52.406 test-bbdev: explicitly disabled via build config 00:01:52.406 test-cmdline: explicitly disabled via build config 00:01:52.406 test-compress-perf: explicitly disabled via build config 00:01:52.406 test-crypto-perf: explicitly disabled via build config 00:01:52.406 test-dma-perf: explicitly disabled via build config 00:01:52.406 test-eventdev: explicitly disabled via build config 00:01:52.406 test-fib: explicitly disabled via build config 00:01:52.406 test-flow-perf: explicitly disabled via build config 00:01:52.406 test-gpudev: explicitly disabled via build config 00:01:52.406 test-mldev: explicitly disabled via build config 00:01:52.406 test-pipeline: explicitly disabled via build config 00:01:52.406 test-pmd: explicitly disabled via build config 
00:01:52.406 test-regex: explicitly disabled via build config 00:01:52.406 test-sad: explicitly disabled via build config 00:01:52.406 test-security-perf: explicitly disabled via build config 00:01:52.406 00:01:52.406 libs: 00:01:52.406 argparse: explicitly disabled via build config 00:01:52.406 metrics: explicitly disabled via build config 00:01:52.406 acl: explicitly disabled via build config 00:01:52.406 bbdev: explicitly disabled via build config 00:01:52.406 bitratestats: explicitly disabled via build config 00:01:52.406 bpf: explicitly disabled via build config 00:01:52.406 cfgfile: explicitly disabled via build config 00:01:52.406 distributor: explicitly disabled via build config 00:01:52.406 efd: explicitly disabled via build config 00:01:52.406 eventdev: explicitly disabled via build config 00:01:52.406 dispatcher: explicitly disabled via build config 00:01:52.406 gpudev: explicitly disabled via build config 00:01:52.406 gro: explicitly disabled via build config 00:01:52.406 gso: explicitly disabled via build config 00:01:52.406 ip_frag: explicitly disabled via build config 00:01:52.406 jobstats: explicitly disabled via build config 00:01:52.406 latencystats: explicitly disabled via build config 00:01:52.406 lpm: explicitly disabled via build config 00:01:52.406 member: explicitly disabled via build config 00:01:52.406 pcapng: explicitly disabled via build config 00:01:52.406 rawdev: explicitly disabled via build config 00:01:52.406 regexdev: explicitly disabled via build config 00:01:52.406 mldev: explicitly disabled via build config 00:01:52.406 rib: explicitly disabled via build config 00:01:52.406 sched: explicitly disabled via build config 00:01:52.406 stack: explicitly disabled via build config 00:01:52.406 ipsec: explicitly disabled via build config 00:01:52.406 pdcp: explicitly disabled via build config 00:01:52.406 fib: explicitly disabled via build config 00:01:52.406 port: explicitly disabled via build config 00:01:52.406 pdump: explicitly disabled via build config 00:01:52.406 table: explicitly disabled via build config 00:01:52.406 pipeline: explicitly disabled via build config 00:01:52.406 graph: explicitly disabled via build config 00:01:52.406 node: explicitly disabled via build config 00:01:52.406 00:01:52.406 drivers: 00:01:52.406 common/cpt: not in enabled drivers build config 00:01:52.406 common/dpaax: not in enabled drivers build config 00:01:52.406 common/iavf: not in enabled drivers build config 00:01:52.406 common/idpf: not in enabled drivers build config 00:01:52.406 common/ionic: not in enabled drivers build config 00:01:52.406 common/mvep: not in enabled drivers build config 00:01:52.406 common/octeontx: not in enabled drivers build config 00:01:52.406 bus/auxiliary: not in enabled drivers build config 00:01:52.406 bus/cdx: not in enabled drivers build config 00:01:52.406 bus/dpaa: not in enabled drivers build config 00:01:52.406 bus/fslmc: not in enabled drivers build config 00:01:52.406 bus/ifpga: not in enabled drivers build config 00:01:52.406 bus/platform: not in enabled drivers build config 00:01:52.406 bus/uacce: not in enabled drivers build config 00:01:52.406 bus/vmbus: not in enabled drivers build config 00:01:52.406 common/cnxk: not in enabled drivers build config 00:01:52.406 common/mlx5: not in enabled drivers build config 00:01:52.406 common/nfp: not in enabled drivers build config 00:01:52.406 common/nitrox: not in enabled drivers build config 00:01:52.406 common/qat: not in enabled drivers build config 00:01:52.406 common/sfc_efx: not in 
enabled drivers build config 00:01:52.406 mempool/bucket: not in enabled drivers build config 00:01:52.406 mempool/cnxk: not in enabled drivers build config 00:01:52.407 mempool/dpaa: not in enabled drivers build config 00:01:52.407 mempool/dpaa2: not in enabled drivers build config 00:01:52.407 mempool/octeontx: not in enabled drivers build config 00:01:52.407 mempool/stack: not in enabled drivers build config 00:01:52.407 dma/cnxk: not in enabled drivers build config 00:01:52.407 dma/dpaa: not in enabled drivers build config 00:01:52.407 dma/dpaa2: not in enabled drivers build config 00:01:52.407 dma/hisilicon: not in enabled drivers build config 00:01:52.407 dma/idxd: not in enabled drivers build config 00:01:52.407 dma/ioat: not in enabled drivers build config 00:01:52.407 dma/skeleton: not in enabled drivers build config 00:01:52.407 net/af_packet: not in enabled drivers build config 00:01:52.407 net/af_xdp: not in enabled drivers build config 00:01:52.407 net/ark: not in enabled drivers build config 00:01:52.407 net/atlantic: not in enabled drivers build config 00:01:52.407 net/avp: not in enabled drivers build config 00:01:52.407 net/axgbe: not in enabled drivers build config 00:01:52.407 net/bnx2x: not in enabled drivers build config 00:01:52.407 net/bnxt: not in enabled drivers build config 00:01:52.407 net/bonding: not in enabled drivers build config 00:01:52.407 net/cnxk: not in enabled drivers build config 00:01:52.407 net/cpfl: not in enabled drivers build config 00:01:52.407 net/cxgbe: not in enabled drivers build config 00:01:52.407 net/dpaa: not in enabled drivers build config 00:01:52.407 net/dpaa2: not in enabled drivers build config 00:01:52.407 net/e1000: not in enabled drivers build config 00:01:52.407 net/ena: not in enabled drivers build config 00:01:52.407 net/enetc: not in enabled drivers build config 00:01:52.407 net/enetfec: not in enabled drivers build config 00:01:52.407 net/enic: not in enabled drivers build config 00:01:52.407 net/failsafe: not in enabled drivers build config 00:01:52.407 net/fm10k: not in enabled drivers build config 00:01:52.407 net/gve: not in enabled drivers build config 00:01:52.407 net/hinic: not in enabled drivers build config 00:01:52.407 net/hns3: not in enabled drivers build config 00:01:52.407 net/i40e: not in enabled drivers build config 00:01:52.407 net/iavf: not in enabled drivers build config 00:01:52.407 net/ice: not in enabled drivers build config 00:01:52.407 net/idpf: not in enabled drivers build config 00:01:52.407 net/igc: not in enabled drivers build config 00:01:52.407 net/ionic: not in enabled drivers build config 00:01:52.407 net/ipn3ke: not in enabled drivers build config 00:01:52.407 net/ixgbe: not in enabled drivers build config 00:01:52.407 net/mana: not in enabled drivers build config 00:01:52.407 net/memif: not in enabled drivers build config 00:01:52.407 net/mlx4: not in enabled drivers build config 00:01:52.407 net/mlx5: not in enabled drivers build config 00:01:52.407 net/mvneta: not in enabled drivers build config 00:01:52.407 net/mvpp2: not in enabled drivers build config 00:01:52.407 net/netvsc: not in enabled drivers build config 00:01:52.407 net/nfb: not in enabled drivers build config 00:01:52.407 net/nfp: not in enabled drivers build config 00:01:52.407 net/ngbe: not in enabled drivers build config 00:01:52.407 net/null: not in enabled drivers build config 00:01:52.407 net/octeontx: not in enabled drivers build config 00:01:52.407 net/octeon_ep: not in enabled drivers build config 00:01:52.407 
net/pcap: not in enabled drivers build config 00:01:52.407 net/pfe: not in enabled drivers build config 00:01:52.407 net/qede: not in enabled drivers build config 00:01:52.407 net/ring: not in enabled drivers build config 00:01:52.407 net/sfc: not in enabled drivers build config 00:01:52.407 net/softnic: not in enabled drivers build config 00:01:52.407 net/tap: not in enabled drivers build config 00:01:52.407 net/thunderx: not in enabled drivers build config 00:01:52.407 net/txgbe: not in enabled drivers build config 00:01:52.407 net/vdev_netvsc: not in enabled drivers build config 00:01:52.407 net/vhost: not in enabled drivers build config 00:01:52.407 net/virtio: not in enabled drivers build config 00:01:52.407 net/vmxnet3: not in enabled drivers build config 00:01:52.407 raw/*: missing internal dependency, "rawdev" 00:01:52.407 crypto/armv8: not in enabled drivers build config 00:01:52.407 crypto/bcmfs: not in enabled drivers build config 00:01:52.407 crypto/caam_jr: not in enabled drivers build config 00:01:52.407 crypto/ccp: not in enabled drivers build config 00:01:52.407 crypto/cnxk: not in enabled drivers build config 00:01:52.407 crypto/dpaa_sec: not in enabled drivers build config 00:01:52.407 crypto/dpaa2_sec: not in enabled drivers build config 00:01:52.407 crypto/ipsec_mb: not in enabled drivers build config 00:01:52.407 crypto/mlx5: not in enabled drivers build config 00:01:52.407 crypto/mvsam: not in enabled drivers build config 00:01:52.407 crypto/nitrox: not in enabled drivers build config 00:01:52.407 crypto/null: not in enabled drivers build config 00:01:52.407 crypto/octeontx: not in enabled drivers build config 00:01:52.407 crypto/openssl: not in enabled drivers build config 00:01:52.407 crypto/scheduler: not in enabled drivers build config 00:01:52.407 crypto/uadk: not in enabled drivers build config 00:01:52.407 crypto/virtio: not in enabled drivers build config 00:01:52.407 compress/isal: not in enabled drivers build config 00:01:52.407 compress/mlx5: not in enabled drivers build config 00:01:52.407 compress/nitrox: not in enabled drivers build config 00:01:52.407 compress/octeontx: not in enabled drivers build config 00:01:52.407 compress/zlib: not in enabled drivers build config 00:01:52.407 regex/*: missing internal dependency, "regexdev" 00:01:52.407 ml/*: missing internal dependency, "mldev" 00:01:52.407 vdpa/ifc: not in enabled drivers build config 00:01:52.407 vdpa/mlx5: not in enabled drivers build config 00:01:52.407 vdpa/nfp: not in enabled drivers build config 00:01:52.407 vdpa/sfc: not in enabled drivers build config 00:01:52.407 event/*: missing internal dependency, "eventdev" 00:01:52.407 baseband/*: missing internal dependency, "bbdev" 00:01:52.407 gpu/*: missing internal dependency, "gpudev" 00:01:52.407 00:01:52.407 00:01:52.407 Build targets in project: 85 00:01:52.407 00:01:52.407 DPDK 24.03.0 00:01:52.407 00:01:52.407 User defined options 00:01:52.407 buildtype : debug 00:01:52.407 default_library : shared 00:01:52.407 libdir : lib 00:01:52.407 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:52.407 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:52.407 c_link_args : 00:01:52.407 cpu_instruction_set: native 00:01:52.407 disable_apps : 
test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:01:52.407 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:01:52.407 enable_docs : false 00:01:52.407 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:52.407 enable_kmods : false 00:01:52.407 max_lcores : 128 00:01:52.407 tests : false 00:01:52.407 00:01:52.407 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:52.674 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:52.674 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:52.674 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:52.674 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:52.674 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:52.674 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:52.674 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:52.674 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:52.674 [8/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:52.674 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:52.674 [10/268] Linking static target lib/librte_kvargs.a 00:01:52.674 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:52.674 [12/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:52.674 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:52.674 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:52.674 [15/268] Linking static target lib/librte_log.a 00:01:52.933 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:53.510 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.510 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:53.510 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:53.510 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:53.510 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:53.510 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:53.510 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:53.510 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:53.510 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:53.510 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:53.510 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:53.510 [28/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:53.510 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:53.510 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 
00:01:53.510 [31/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:53.510 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:53.510 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:53.510 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:53.510 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:53.510 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:53.510 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:53.510 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:53.510 [39/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:53.510 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:53.510 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:53.510 [42/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:53.773 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:53.773 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:53.773 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:53.773 [46/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:53.773 [47/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:53.773 [48/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:53.773 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:53.773 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:53.773 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:53.773 [52/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:53.773 [53/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:53.773 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:53.773 [55/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:53.773 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:53.773 [57/268] Linking static target lib/librte_telemetry.a 00:01:53.773 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:53.773 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:53.773 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:53.773 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:53.773 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:54.036 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:54.036 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:54.036 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:54.298 [66/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.298 [67/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:54.298 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:54.298 [69/268] Linking static target lib/librte_pci.a 00:01:54.298 [70/268] Linking target lib/librte_log.so.24.1 00:01:54.298 [71/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:54.298 [72/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:54.299 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:54.558 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:54.558 [75/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:54.558 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:54.558 [77/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:54.558 [78/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:54.558 [79/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:54.558 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:54.558 [81/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:54.558 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:54.558 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:54.558 [84/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:54.558 [85/268] Linking static target lib/librte_ring.a 00:01:54.558 [86/268] Linking target lib/librte_kvargs.so.24.1 00:01:54.558 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:54.558 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:54.558 [89/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:54.558 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:54.558 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:54.558 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:54.558 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:54.558 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:54.558 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:54.558 [96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:54.558 [97/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:54.558 [98/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:54.558 [99/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:54.558 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:54.558 [101/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:54.820 [102/268] Linking static target lib/librte_meter.a 00:01:54.820 [103/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.820 [104/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:54.820 [105/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:54.820 [106/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:54.820 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:54.820 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:54.820 [109/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.820 [110/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:54.820 [111/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 
00:01:54.820 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:54.820 [113/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:54.820 [114/268] Linking static target lib/librte_mempool.a 00:01:54.820 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:54.820 [116/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:54.820 [117/268] Linking static target lib/librte_rcu.a 00:01:54.820 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:54.820 [119/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:54.820 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:54.820 [121/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:54.820 [122/268] Linking static target lib/librte_eal.a 00:01:54.820 [123/268] Linking target lib/librte_telemetry.so.24.1 00:01:54.820 [124/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:54.820 [125/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:55.082 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:55.082 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:55.082 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:55.082 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:55.082 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:55.082 [131/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:55.082 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:55.082 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:55.082 [134/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:55.344 [135/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.344 [136/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.344 [137/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:55.344 [138/268] Linking static target lib/librte_net.a 00:01:55.344 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:55.344 [140/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:55.344 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:55.344 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:55.344 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:55.604 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:55.604 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:55.604 [146/268] Linking static target lib/librte_cmdline.a 00:01:55.604 [147/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.604 [148/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:55.604 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:55.604 [150/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:55.604 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:55.604 [152/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:55.604 [153/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:55.604 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:55.863 [155/268] Linking static target lib/librte_timer.a 00:01:55.863 [156/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:55.863 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:55.863 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:55.863 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:55.863 [160/268] Linking static target lib/librte_dmadev.a 00:01:55.863 [161/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.863 [162/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:55.863 [163/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:55.863 [164/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:55.863 [165/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:55.863 [166/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.863 [167/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:56.122 [168/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:56.122 [169/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:56.122 [170/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:56.122 [171/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:56.122 [172/268] Linking static target lib/librte_compressdev.a 00:01:56.122 [173/268] Linking static target lib/librte_power.a 00:01:56.122 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:56.122 [175/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.122 [176/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:56.122 [177/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:56.122 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:56.122 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:56.122 [180/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:56.122 [181/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:56.122 [182/268] Linking static target lib/librte_hash.a 00:01:56.379 [183/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:56.380 [184/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.380 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:56.380 [186/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:56.380 [187/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:56.380 [188/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:56.380 [189/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:56.380 [190/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.380 [191/268] Linking static target lib/librte_mbuf.a 00:01:56.380 [192/268] Compiling C 
object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:56.380 [193/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:56.380 [194/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:56.380 [195/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:56.380 [196/268] Linking static target lib/librte_reorder.a 00:01:56.652 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:56.652 [198/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.652 [199/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:56.652 [200/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:56.652 [201/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:56.652 [202/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:56.652 [203/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:56.652 [204/268] Linking static target drivers/librte_bus_vdev.a 00:01:56.652 [205/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.652 [206/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:56.652 [207/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:56.652 [208/268] Linking static target lib/librte_security.a 00:01:56.652 [209/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:56.652 [210/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:56.652 [211/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:56.652 [212/268] Linking static target drivers/librte_bus_pci.a 00:01:56.652 [213/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:56.652 [214/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.652 [215/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:56.652 [216/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:56.652 [217/268] Linking static target drivers/librte_mempool_ring.a 00:01:56.909 [218/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.909 [219/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.909 [220/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.909 [221/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:56.909 [222/268] Linking static target lib/librte_ethdev.a 00:01:57.167 [223/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:57.167 [224/268] Linking static target lib/librte_cryptodev.a 00:01:57.167 [225/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.167 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.100 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.472 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:01.371 [229/268] Generating lib/eal.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:01.371 [230/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.371 [231/268] Linking target lib/librte_eal.so.24.1 00:02:01.371 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:01.371 [233/268] Linking target lib/librte_ring.so.24.1 00:02:01.371 [234/268] Linking target lib/librte_timer.so.24.1 00:02:01.371 [235/268] Linking target lib/librte_pci.so.24.1 00:02:01.371 [236/268] Linking target lib/librte_meter.so.24.1 00:02:01.371 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:01.371 [238/268] Linking target lib/librte_dmadev.so.24.1 00:02:01.372 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:01.372 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:01.372 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:01.372 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:01.372 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:01.372 [244/268] Linking target lib/librte_rcu.so.24.1 00:02:01.372 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:01.372 [246/268] Linking target lib/librte_mempool.so.24.1 00:02:01.629 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:01.629 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:01.629 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:01.629 [250/268] Linking target lib/librte_mbuf.so.24.1 00:02:01.629 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:01.886 [252/268] Linking target lib/librte_reorder.so.24.1 00:02:01.887 [253/268] Linking target lib/librte_net.so.24.1 00:02:01.887 [254/268] Linking target lib/librte_compressdev.so.24.1 00:02:01.887 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:02:01.887 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:01.887 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:01.887 [258/268] Linking target lib/librte_security.so.24.1 00:02:01.887 [259/268] Linking target lib/librte_cmdline.so.24.1 00:02:01.887 [260/268] Linking target lib/librte_hash.so.24.1 00:02:01.887 [261/268] Linking target lib/librte_ethdev.so.24.1 00:02:02.144 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:02.144 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:02.144 [264/268] Linking target lib/librte_power.so.24.1 00:02:04.668 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:04.668 [266/268] Linking static target lib/librte_vhost.a 00:02:05.604 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.863 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:05.863 INFO: autodetecting backend as ninja 00:02:05.863 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:02:06.800 CC lib/ut_mock/mock.o 00:02:06.800 CC lib/log/log.o 00:02:06.800 CC lib/ut/ut.o 00:02:06.800 CC lib/log/log_flags.o 00:02:06.800 CC lib/log/log_deprecated.o 00:02:06.800 LIB libspdk_ut_mock.a 
00:02:06.800 LIB libspdk_log.a 00:02:06.800 LIB libspdk_ut.a 00:02:06.800 SO libspdk_ut.so.2.0 00:02:06.800 SO libspdk_ut_mock.so.6.0 00:02:06.800 SO libspdk_log.so.7.0 00:02:07.059 SYMLINK libspdk_ut_mock.so 00:02:07.059 SYMLINK libspdk_ut.so 00:02:07.059 SYMLINK libspdk_log.so 00:02:07.059 CXX lib/trace_parser/trace.o 00:02:07.059 CC lib/ioat/ioat.o 00:02:07.059 CC lib/util/base64.o 00:02:07.059 CC lib/util/bit_array.o 00:02:07.059 CC lib/dma/dma.o 00:02:07.059 CC lib/util/cpuset.o 00:02:07.059 CC lib/util/crc16.o 00:02:07.059 CC lib/util/crc32.o 00:02:07.059 CC lib/util/crc32c.o 00:02:07.059 CC lib/util/crc32_ieee.o 00:02:07.059 CC lib/util/crc64.o 00:02:07.059 CC lib/util/dif.o 00:02:07.059 CC lib/util/fd.o 00:02:07.059 CC lib/util/fd_group.o 00:02:07.059 CC lib/util/file.o 00:02:07.059 CC lib/util/hexlify.o 00:02:07.059 CC lib/util/iov.o 00:02:07.059 CC lib/util/math.o 00:02:07.059 CC lib/util/net.o 00:02:07.059 CC lib/util/pipe.o 00:02:07.059 CC lib/util/strerror_tls.o 00:02:07.059 CC lib/util/string.o 00:02:07.059 CC lib/util/uuid.o 00:02:07.059 CC lib/util/zipf.o 00:02:07.059 CC lib/util/xor.o 00:02:07.317 CC lib/vfio_user/host/vfio_user_pci.o 00:02:07.317 CC lib/vfio_user/host/vfio_user.o 00:02:07.317 LIB libspdk_dma.a 00:02:07.317 LIB libspdk_ioat.a 00:02:07.586 SO libspdk_dma.so.4.0 00:02:07.586 SO libspdk_ioat.so.7.0 00:02:07.586 SYMLINK libspdk_dma.so 00:02:07.586 SYMLINK libspdk_ioat.so 00:02:07.586 LIB libspdk_vfio_user.a 00:02:07.586 SO libspdk_vfio_user.so.5.0 00:02:07.586 SYMLINK libspdk_vfio_user.so 00:02:07.586 LIB libspdk_util.a 00:02:07.930 SO libspdk_util.so.9.1 00:02:07.930 SYMLINK libspdk_util.so 00:02:08.187 CC lib/idxd/idxd.o 00:02:08.187 CC lib/json/json_parse.o 00:02:08.187 CC lib/rdma_utils/rdma_utils.o 00:02:08.187 CC lib/vmd/vmd.o 00:02:08.187 CC lib/env_dpdk/env.o 00:02:08.187 CC lib/idxd/idxd_user.o 00:02:08.187 CC lib/json/json_util.o 00:02:08.187 CC lib/vmd/led.o 00:02:08.187 CC lib/idxd/idxd_kernel.o 00:02:08.187 CC lib/env_dpdk/memory.o 00:02:08.187 CC lib/json/json_write.o 00:02:08.187 CC lib/env_dpdk/pci.o 00:02:08.187 CC lib/env_dpdk/init.o 00:02:08.187 CC lib/env_dpdk/threads.o 00:02:08.187 CC lib/conf/conf.o 00:02:08.187 CC lib/rdma_provider/common.o 00:02:08.187 CC lib/env_dpdk/pci_ioat.o 00:02:08.187 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:08.187 CC lib/env_dpdk/pci_virtio.o 00:02:08.187 CC lib/env_dpdk/pci_vmd.o 00:02:08.187 CC lib/env_dpdk/pci_idxd.o 00:02:08.187 CC lib/env_dpdk/pci_event.o 00:02:08.187 CC lib/env_dpdk/sigbus_handler.o 00:02:08.187 CC lib/env_dpdk/pci_dpdk.o 00:02:08.187 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:08.187 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:08.187 LIB libspdk_trace_parser.a 00:02:08.187 SO libspdk_trace_parser.so.5.0 00:02:08.187 SYMLINK libspdk_trace_parser.so 00:02:08.443 LIB libspdk_conf.a 00:02:08.443 SO libspdk_conf.so.6.0 00:02:08.443 LIB libspdk_rdma_provider.a 00:02:08.443 LIB libspdk_json.a 00:02:08.443 SO libspdk_rdma_provider.so.6.0 00:02:08.443 SYMLINK libspdk_conf.so 00:02:08.443 SO libspdk_json.so.6.0 00:02:08.443 SYMLINK libspdk_rdma_provider.so 00:02:08.443 LIB libspdk_rdma_utils.a 00:02:08.443 SO libspdk_rdma_utils.so.1.0 00:02:08.443 SYMLINK libspdk_json.so 00:02:08.443 SYMLINK libspdk_rdma_utils.so 00:02:08.700 CC lib/jsonrpc/jsonrpc_server.o 00:02:08.700 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:08.700 CC lib/jsonrpc/jsonrpc_client.o 00:02:08.700 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:08.700 LIB libspdk_idxd.a 00:02:08.700 SO libspdk_idxd.so.12.0 00:02:08.700 SYMLINK libspdk_idxd.so 
00:02:08.700 LIB libspdk_vmd.a 00:02:08.700 SO libspdk_vmd.so.6.0 00:02:08.957 SYMLINK libspdk_vmd.so 00:02:08.957 LIB libspdk_jsonrpc.a 00:02:08.957 SO libspdk_jsonrpc.so.6.0 00:02:08.957 SYMLINK libspdk_jsonrpc.so 00:02:09.215 CC lib/rpc/rpc.o 00:02:09.472 LIB libspdk_rpc.a 00:02:09.472 SO libspdk_rpc.so.6.0 00:02:09.472 SYMLINK libspdk_rpc.so 00:02:09.729 CC lib/trace/trace.o 00:02:09.729 CC lib/notify/notify.o 00:02:09.729 CC lib/trace/trace_flags.o 00:02:09.729 CC lib/notify/notify_rpc.o 00:02:09.729 CC lib/trace/trace_rpc.o 00:02:09.729 CC lib/keyring/keyring.o 00:02:09.729 CC lib/keyring/keyring_rpc.o 00:02:09.729 LIB libspdk_notify.a 00:02:09.729 SO libspdk_notify.so.6.0 00:02:09.987 LIB libspdk_keyring.a 00:02:09.987 SYMLINK libspdk_notify.so 00:02:09.987 LIB libspdk_trace.a 00:02:09.987 SO libspdk_keyring.so.1.0 00:02:09.987 SO libspdk_trace.so.10.0 00:02:09.987 SYMLINK libspdk_keyring.so 00:02:09.987 SYMLINK libspdk_trace.so 00:02:09.987 LIB libspdk_env_dpdk.a 00:02:10.244 SO libspdk_env_dpdk.so.14.1 00:02:10.244 CC lib/thread/thread.o 00:02:10.244 CC lib/thread/iobuf.o 00:02:10.244 CC lib/sock/sock.o 00:02:10.244 CC lib/sock/sock_rpc.o 00:02:10.244 SYMLINK libspdk_env_dpdk.so 00:02:10.501 LIB libspdk_sock.a 00:02:10.501 SO libspdk_sock.so.10.0 00:02:10.759 SYMLINK libspdk_sock.so 00:02:10.759 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:10.759 CC lib/nvme/nvme_ctrlr.o 00:02:10.759 CC lib/nvme/nvme_fabric.o 00:02:10.759 CC lib/nvme/nvme_ns_cmd.o 00:02:10.759 CC lib/nvme/nvme_ns.o 00:02:10.759 CC lib/nvme/nvme_pcie_common.o 00:02:10.759 CC lib/nvme/nvme_pcie.o 00:02:10.759 CC lib/nvme/nvme_qpair.o 00:02:10.759 CC lib/nvme/nvme.o 00:02:10.759 CC lib/nvme/nvme_quirks.o 00:02:10.759 CC lib/nvme/nvme_transport.o 00:02:10.759 CC lib/nvme/nvme_discovery.o 00:02:10.759 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:10.759 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:10.759 CC lib/nvme/nvme_tcp.o 00:02:10.759 CC lib/nvme/nvme_opal.o 00:02:10.759 CC lib/nvme/nvme_io_msg.o 00:02:10.759 CC lib/nvme/nvme_poll_group.o 00:02:10.759 CC lib/nvme/nvme_zns.o 00:02:10.759 CC lib/nvme/nvme_stubs.o 00:02:10.759 CC lib/nvme/nvme_auth.o 00:02:10.759 CC lib/nvme/nvme_cuse.o 00:02:10.759 CC lib/nvme/nvme_vfio_user.o 00:02:10.759 CC lib/nvme/nvme_rdma.o 00:02:11.691 LIB libspdk_thread.a 00:02:11.691 SO libspdk_thread.so.10.1 00:02:11.949 SYMLINK libspdk_thread.so 00:02:11.949 CC lib/vfu_tgt/tgt_endpoint.o 00:02:11.949 CC lib/accel/accel.o 00:02:11.949 CC lib/blob/blobstore.o 00:02:11.949 CC lib/virtio/virtio.o 00:02:11.949 CC lib/blob/request.o 00:02:11.949 CC lib/init/json_config.o 00:02:11.949 CC lib/init/subsystem.o 00:02:11.949 CC lib/vfu_tgt/tgt_rpc.o 00:02:11.949 CC lib/accel/accel_rpc.o 00:02:11.949 CC lib/blob/zeroes.o 00:02:11.949 CC lib/accel/accel_sw.o 00:02:11.949 CC lib/virtio/virtio_vhost_user.o 00:02:11.949 CC lib/blob/blob_bs_dev.o 00:02:11.949 CC lib/init/subsystem_rpc.o 00:02:11.949 CC lib/virtio/virtio_vfio_user.o 00:02:11.949 CC lib/init/rpc.o 00:02:11.949 CC lib/virtio/virtio_pci.o 00:02:12.207 LIB libspdk_init.a 00:02:12.207 SO libspdk_init.so.5.0 00:02:12.464 LIB libspdk_vfu_tgt.a 00:02:12.464 LIB libspdk_virtio.a 00:02:12.464 SYMLINK libspdk_init.so 00:02:12.464 SO libspdk_vfu_tgt.so.3.0 00:02:12.464 SO libspdk_virtio.so.7.0 00:02:12.464 SYMLINK libspdk_vfu_tgt.so 00:02:12.464 SYMLINK libspdk_virtio.so 00:02:12.464 CC lib/event/app.o 00:02:12.464 CC lib/event/reactor.o 00:02:12.464 CC lib/event/log_rpc.o 00:02:12.464 CC lib/event/app_rpc.o 00:02:12.464 CC lib/event/scheduler_static.o 00:02:13.030 LIB 
libspdk_event.a 00:02:13.030 SO libspdk_event.so.14.0 00:02:13.030 LIB libspdk_accel.a 00:02:13.030 SYMLINK libspdk_event.so 00:02:13.030 SO libspdk_accel.so.15.1 00:02:13.030 SYMLINK libspdk_accel.so 00:02:13.289 LIB libspdk_nvme.a 00:02:13.289 SO libspdk_nvme.so.13.1 00:02:13.289 CC lib/bdev/bdev.o 00:02:13.289 CC lib/bdev/bdev_rpc.o 00:02:13.289 CC lib/bdev/bdev_zone.o 00:02:13.289 CC lib/bdev/part.o 00:02:13.289 CC lib/bdev/scsi_nvme.o 00:02:13.547 SYMLINK libspdk_nvme.so 00:02:15.448 LIB libspdk_blob.a 00:02:15.448 SO libspdk_blob.so.11.0 00:02:15.448 SYMLINK libspdk_blob.so 00:02:15.448 CC lib/lvol/lvol.o 00:02:15.448 CC lib/blobfs/blobfs.o 00:02:15.448 CC lib/blobfs/tree.o 00:02:16.014 LIB libspdk_bdev.a 00:02:16.014 SO libspdk_bdev.so.15.1 00:02:16.014 SYMLINK libspdk_bdev.so 00:02:16.283 LIB libspdk_blobfs.a 00:02:16.283 SO libspdk_blobfs.so.10.0 00:02:16.283 CC lib/nbd/nbd.o 00:02:16.283 CC lib/ublk/ublk.o 00:02:16.283 CC lib/scsi/dev.o 00:02:16.283 CC lib/nbd/nbd_rpc.o 00:02:16.283 CC lib/ublk/ublk_rpc.o 00:02:16.283 CC lib/scsi/lun.o 00:02:16.283 CC lib/ftl/ftl_core.o 00:02:16.283 CC lib/scsi/port.o 00:02:16.283 CC lib/ftl/ftl_init.o 00:02:16.283 CC lib/scsi/scsi.o 00:02:16.283 CC lib/ftl/ftl_layout.o 00:02:16.283 CC lib/scsi/scsi_bdev.o 00:02:16.283 CC lib/ftl/ftl_debug.o 00:02:16.283 CC lib/scsi/scsi_pr.o 00:02:16.283 CC lib/ftl/ftl_io.o 00:02:16.283 CC lib/scsi/scsi_rpc.o 00:02:16.283 CC lib/scsi/task.o 00:02:16.283 CC lib/ftl/ftl_sb.o 00:02:16.283 CC lib/nvmf/ctrlr.o 00:02:16.283 CC lib/ftl/ftl_l2p.o 00:02:16.283 CC lib/ftl/ftl_l2p_flat.o 00:02:16.283 CC lib/nvmf/ctrlr_discovery.o 00:02:16.283 CC lib/ftl/ftl_nv_cache.o 00:02:16.283 CC lib/ftl/ftl_band.o 00:02:16.283 CC lib/nvmf/subsystem.o 00:02:16.283 CC lib/nvmf/ctrlr_bdev.o 00:02:16.283 CC lib/ftl/ftl_band_ops.o 00:02:16.283 CC lib/ftl/ftl_writer.o 00:02:16.283 CC lib/nvmf/nvmf.o 00:02:16.283 CC lib/ftl/ftl_rq.o 00:02:16.283 CC lib/nvmf/nvmf_rpc.o 00:02:16.283 CC lib/nvmf/transport.o 00:02:16.283 CC lib/ftl/ftl_reloc.o 00:02:16.283 CC lib/nvmf/tcp.o 00:02:16.283 CC lib/ftl/ftl_l2p_cache.o 00:02:16.283 CC lib/nvmf/stubs.o 00:02:16.283 CC lib/ftl/ftl_p2l.o 00:02:16.283 CC lib/nvmf/mdns_server.o 00:02:16.283 CC lib/ftl/mngt/ftl_mngt.o 00:02:16.283 CC lib/nvmf/vfio_user.o 00:02:16.283 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:16.283 CC lib/nvmf/rdma.o 00:02:16.283 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:16.283 CC lib/nvmf/auth.o 00:02:16.283 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:16.283 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:16.283 SYMLINK libspdk_blobfs.so 00:02:16.283 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:16.283 LIB libspdk_lvol.a 00:02:16.559 SO libspdk_lvol.so.10.0 00:02:16.559 SYMLINK libspdk_lvol.so 00:02:16.559 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:16.559 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:16.559 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:16.559 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:16.559 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:16.559 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:16.559 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:16.817 CC lib/ftl/utils/ftl_conf.o 00:02:16.817 CC lib/ftl/utils/ftl_md.o 00:02:16.817 CC lib/ftl/utils/ftl_mempool.o 00:02:16.817 CC lib/ftl/utils/ftl_bitmap.o 00:02:16.817 CC lib/ftl/utils/ftl_property.o 00:02:16.817 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:16.817 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:16.817 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:16.817 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:16.817 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:16.817 CC 
lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:16.817 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:16.817 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:16.817 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:16.817 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:16.817 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:16.817 CC lib/ftl/base/ftl_base_dev.o 00:02:17.075 CC lib/ftl/base/ftl_base_bdev.o 00:02:17.075 CC lib/ftl/ftl_trace.o 00:02:17.075 LIB libspdk_nbd.a 00:02:17.075 SO libspdk_nbd.so.7.0 00:02:17.075 SYMLINK libspdk_nbd.so 00:02:17.331 LIB libspdk_scsi.a 00:02:17.331 SO libspdk_scsi.so.9.0 00:02:17.331 SYMLINK libspdk_scsi.so 00:02:17.331 LIB libspdk_ublk.a 00:02:17.331 SO libspdk_ublk.so.3.0 00:02:17.589 SYMLINK libspdk_ublk.so 00:02:17.589 CC lib/vhost/vhost.o 00:02:17.589 CC lib/iscsi/conn.o 00:02:17.589 CC lib/iscsi/init_grp.o 00:02:17.589 CC lib/vhost/vhost_rpc.o 00:02:17.589 CC lib/iscsi/iscsi.o 00:02:17.589 CC lib/vhost/vhost_scsi.o 00:02:17.589 CC lib/iscsi/md5.o 00:02:17.589 CC lib/vhost/vhost_blk.o 00:02:17.589 CC lib/iscsi/param.o 00:02:17.589 CC lib/vhost/rte_vhost_user.o 00:02:17.589 CC lib/iscsi/portal_grp.o 00:02:17.589 CC lib/iscsi/tgt_node.o 00:02:17.589 CC lib/iscsi/iscsi_subsystem.o 00:02:17.589 CC lib/iscsi/iscsi_rpc.o 00:02:17.589 CC lib/iscsi/task.o 00:02:17.846 LIB libspdk_ftl.a 00:02:17.846 SO libspdk_ftl.so.9.0 00:02:18.409 SYMLINK libspdk_ftl.so 00:02:18.667 LIB libspdk_vhost.a 00:02:18.667 LIB libspdk_nvmf.a 00:02:18.667 SO libspdk_vhost.so.8.0 00:02:18.923 SO libspdk_nvmf.so.19.0 00:02:18.923 SYMLINK libspdk_vhost.so 00:02:18.923 LIB libspdk_iscsi.a 00:02:18.923 SO libspdk_iscsi.so.8.0 00:02:19.179 SYMLINK libspdk_nvmf.so 00:02:19.179 SYMLINK libspdk_iscsi.so 00:02:19.436 CC module/env_dpdk/env_dpdk_rpc.o 00:02:19.436 CC module/vfu_device/vfu_virtio.o 00:02:19.436 CC module/vfu_device/vfu_virtio_blk.o 00:02:19.436 CC module/vfu_device/vfu_virtio_scsi.o 00:02:19.436 CC module/vfu_device/vfu_virtio_rpc.o 00:02:19.436 CC module/sock/posix/posix.o 00:02:19.436 CC module/keyring/file/keyring.o 00:02:19.436 CC module/keyring/file/keyring_rpc.o 00:02:19.436 CC module/accel/ioat/accel_ioat.o 00:02:19.436 CC module/accel/ioat/accel_ioat_rpc.o 00:02:19.436 CC module/accel/error/accel_error.o 00:02:19.436 CC module/accel/error/accel_error_rpc.o 00:02:19.436 CC module/scheduler/gscheduler/gscheduler.o 00:02:19.436 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:19.436 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:19.436 CC module/blob/bdev/blob_bdev.o 00:02:19.436 CC module/accel/dsa/accel_dsa.o 00:02:19.436 CC module/keyring/linux/keyring.o 00:02:19.436 CC module/accel/dsa/accel_dsa_rpc.o 00:02:19.436 CC module/accel/iaa/accel_iaa.o 00:02:19.436 CC module/keyring/linux/keyring_rpc.o 00:02:19.436 CC module/accel/iaa/accel_iaa_rpc.o 00:02:19.693 LIB libspdk_env_dpdk_rpc.a 00:02:19.693 SO libspdk_env_dpdk_rpc.so.6.0 00:02:19.693 SYMLINK libspdk_env_dpdk_rpc.so 00:02:19.693 LIB libspdk_keyring_file.a 00:02:19.693 LIB libspdk_keyring_linux.a 00:02:19.693 LIB libspdk_scheduler_gscheduler.a 00:02:19.693 LIB libspdk_scheduler_dpdk_governor.a 00:02:19.693 SO libspdk_keyring_file.so.1.0 00:02:19.693 SO libspdk_keyring_linux.so.1.0 00:02:19.693 SO libspdk_scheduler_gscheduler.so.4.0 00:02:19.693 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:19.693 LIB libspdk_accel_ioat.a 00:02:19.693 LIB libspdk_accel_error.a 00:02:19.693 LIB libspdk_scheduler_dynamic.a 00:02:19.693 LIB libspdk_accel_iaa.a 00:02:19.693 SO libspdk_accel_error.so.2.0 00:02:19.693 SO libspdk_accel_ioat.so.6.0 00:02:19.693 SYMLINK 
libspdk_keyring_file.so 00:02:19.693 SYMLINK libspdk_scheduler_gscheduler.so 00:02:19.693 SO libspdk_scheduler_dynamic.so.4.0 00:02:19.693 SYMLINK libspdk_keyring_linux.so 00:02:19.693 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:19.693 SO libspdk_accel_iaa.so.3.0 00:02:19.693 LIB libspdk_accel_dsa.a 00:02:19.693 SYMLINK libspdk_accel_ioat.so 00:02:19.693 SYMLINK libspdk_accel_error.so 00:02:19.950 LIB libspdk_blob_bdev.a 00:02:19.950 SYMLINK libspdk_scheduler_dynamic.so 00:02:19.950 SO libspdk_accel_dsa.so.5.0 00:02:19.950 SO libspdk_blob_bdev.so.11.0 00:02:19.950 SYMLINK libspdk_accel_iaa.so 00:02:19.950 SYMLINK libspdk_blob_bdev.so 00:02:19.950 SYMLINK libspdk_accel_dsa.so 00:02:20.208 LIB libspdk_vfu_device.a 00:02:20.208 SO libspdk_vfu_device.so.3.0 00:02:20.208 CC module/bdev/error/vbdev_error.o 00:02:20.208 CC module/bdev/malloc/bdev_malloc.o 00:02:20.208 CC module/bdev/error/vbdev_error_rpc.o 00:02:20.208 CC module/bdev/lvol/vbdev_lvol.o 00:02:20.208 CC module/bdev/gpt/vbdev_gpt.o 00:02:20.208 CC module/bdev/gpt/gpt.o 00:02:20.208 CC module/blobfs/bdev/blobfs_bdev.o 00:02:20.208 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:20.208 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:20.208 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:20.208 CC module/bdev/passthru/vbdev_passthru.o 00:02:20.208 CC module/bdev/raid/bdev_raid.o 00:02:20.208 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:20.208 CC module/bdev/raid/bdev_raid_rpc.o 00:02:20.208 CC module/bdev/delay/vbdev_delay.o 00:02:20.208 CC module/bdev/split/vbdev_split.o 00:02:20.208 CC module/bdev/raid/raid0.o 00:02:20.208 CC module/bdev/raid/bdev_raid_sb.o 00:02:20.208 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:20.208 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:20.208 CC module/bdev/raid/raid1.o 00:02:20.208 CC module/bdev/null/bdev_null.o 00:02:20.208 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:20.208 CC module/bdev/nvme/bdev_nvme.o 00:02:20.208 CC module/bdev/split/vbdev_split_rpc.o 00:02:20.208 CC module/bdev/raid/concat.o 00:02:20.208 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:20.208 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:20.208 CC module/bdev/aio/bdev_aio.o 00:02:20.208 CC module/bdev/ftl/bdev_ftl.o 00:02:20.208 CC module/bdev/null/bdev_null_rpc.o 00:02:20.208 CC module/bdev/aio/bdev_aio_rpc.o 00:02:20.208 CC module/bdev/nvme/nvme_rpc.o 00:02:20.208 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:20.208 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:20.208 CC module/bdev/nvme/bdev_mdns_client.o 00:02:20.208 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:20.208 CC module/bdev/nvme/vbdev_opal.o 00:02:20.208 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:20.208 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:20.208 CC module/bdev/iscsi/bdev_iscsi.o 00:02:20.208 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:20.208 SYMLINK libspdk_vfu_device.so 00:02:20.466 LIB libspdk_sock_posix.a 00:02:20.466 SO libspdk_sock_posix.so.6.0 00:02:20.466 LIB libspdk_bdev_null.a 00:02:20.466 LIB libspdk_blobfs_bdev.a 00:02:20.466 LIB libspdk_bdev_split.a 00:02:20.466 SYMLINK libspdk_sock_posix.so 00:02:20.466 SO libspdk_bdev_null.so.6.0 00:02:20.466 LIB libspdk_bdev_error.a 00:02:20.466 SO libspdk_blobfs_bdev.so.6.0 00:02:20.466 SO libspdk_bdev_split.so.6.0 00:02:20.466 SO libspdk_bdev_error.so.6.0 00:02:20.724 SYMLINK libspdk_bdev_null.so 00:02:20.724 SYMLINK libspdk_blobfs_bdev.so 00:02:20.724 SYMLINK libspdk_bdev_split.so 00:02:20.724 LIB libspdk_bdev_ftl.a 00:02:20.724 LIB libspdk_bdev_passthru.a 00:02:20.724 SYMLINK 
libspdk_bdev_error.so 00:02:20.724 SO libspdk_bdev_passthru.so.6.0 00:02:20.724 SO libspdk_bdev_ftl.so.6.0 00:02:20.724 LIB libspdk_bdev_gpt.a 00:02:20.724 LIB libspdk_bdev_zone_block.a 00:02:20.724 SO libspdk_bdev_gpt.so.6.0 00:02:20.724 SO libspdk_bdev_zone_block.so.6.0 00:02:20.724 SYMLINK libspdk_bdev_passthru.so 00:02:20.724 SYMLINK libspdk_bdev_ftl.so 00:02:20.724 LIB libspdk_bdev_aio.a 00:02:20.724 SYMLINK libspdk_bdev_gpt.so 00:02:20.724 LIB libspdk_bdev_malloc.a 00:02:20.724 SO libspdk_bdev_aio.so.6.0 00:02:20.724 SYMLINK libspdk_bdev_zone_block.so 00:02:20.724 LIB libspdk_bdev_iscsi.a 00:02:20.724 SO libspdk_bdev_malloc.so.6.0 00:02:20.724 LIB libspdk_bdev_delay.a 00:02:20.724 SO libspdk_bdev_iscsi.so.6.0 00:02:20.724 SO libspdk_bdev_delay.so.6.0 00:02:20.724 SYMLINK libspdk_bdev_aio.so 00:02:20.982 SYMLINK libspdk_bdev_malloc.so 00:02:20.982 SYMLINK libspdk_bdev_iscsi.so 00:02:20.982 SYMLINK libspdk_bdev_delay.so 00:02:20.982 LIB libspdk_bdev_lvol.a 00:02:20.982 LIB libspdk_bdev_virtio.a 00:02:20.982 SO libspdk_bdev_lvol.so.6.0 00:02:20.982 SO libspdk_bdev_virtio.so.6.0 00:02:20.982 SYMLINK libspdk_bdev_lvol.so 00:02:20.982 SYMLINK libspdk_bdev_virtio.so 00:02:21.239 LIB libspdk_bdev_raid.a 00:02:21.497 SO libspdk_bdev_raid.so.6.0 00:02:21.497 SYMLINK libspdk_bdev_raid.so 00:02:22.431 LIB libspdk_bdev_nvme.a 00:02:22.690 SO libspdk_bdev_nvme.so.7.0 00:02:22.690 SYMLINK libspdk_bdev_nvme.so 00:02:22.948 CC module/event/subsystems/iobuf/iobuf.o 00:02:22.948 CC module/event/subsystems/vmd/vmd.o 00:02:22.948 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:22.948 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:22.948 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:22.948 CC module/event/subsystems/keyring/keyring.o 00:02:22.948 CC module/event/subsystems/sock/sock.o 00:02:22.948 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:22.948 CC module/event/subsystems/scheduler/scheduler.o 00:02:23.206 LIB libspdk_event_keyring.a 00:02:23.206 LIB libspdk_event_sock.a 00:02:23.206 LIB libspdk_event_scheduler.a 00:02:23.206 LIB libspdk_event_vhost_blk.a 00:02:23.206 LIB libspdk_event_vmd.a 00:02:23.206 LIB libspdk_event_vfu_tgt.a 00:02:23.206 SO libspdk_event_keyring.so.1.0 00:02:23.206 LIB libspdk_event_iobuf.a 00:02:23.206 SO libspdk_event_sock.so.5.0 00:02:23.206 SO libspdk_event_scheduler.so.4.0 00:02:23.206 SO libspdk_event_vhost_blk.so.3.0 00:02:23.207 SO libspdk_event_vfu_tgt.so.3.0 00:02:23.207 SO libspdk_event_vmd.so.6.0 00:02:23.207 SO libspdk_event_iobuf.so.3.0 00:02:23.207 SYMLINK libspdk_event_keyring.so 00:02:23.207 SYMLINK libspdk_event_sock.so 00:02:23.207 SYMLINK libspdk_event_vhost_blk.so 00:02:23.207 SYMLINK libspdk_event_scheduler.so 00:02:23.207 SYMLINK libspdk_event_vfu_tgt.so 00:02:23.207 SYMLINK libspdk_event_vmd.so 00:02:23.207 SYMLINK libspdk_event_iobuf.so 00:02:23.465 CC module/event/subsystems/accel/accel.o 00:02:23.724 LIB libspdk_event_accel.a 00:02:23.724 SO libspdk_event_accel.so.6.0 00:02:23.724 SYMLINK libspdk_event_accel.so 00:02:23.982 CC module/event/subsystems/bdev/bdev.o 00:02:23.982 LIB libspdk_event_bdev.a 00:02:23.982 SO libspdk_event_bdev.so.6.0 00:02:24.241 SYMLINK libspdk_event_bdev.so 00:02:24.241 CC module/event/subsystems/scsi/scsi.o 00:02:24.241 CC module/event/subsystems/ublk/ublk.o 00:02:24.241 CC module/event/subsystems/nbd/nbd.o 00:02:24.241 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:24.241 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:24.500 LIB libspdk_event_nbd.a 00:02:24.500 LIB libspdk_event_ublk.a 00:02:24.500 LIB 
libspdk_event_scsi.a 00:02:24.500 SO libspdk_event_ublk.so.3.0 00:02:24.500 SO libspdk_event_nbd.so.6.0 00:02:24.500 SO libspdk_event_scsi.so.6.0 00:02:24.500 SYMLINK libspdk_event_nbd.so 00:02:24.500 SYMLINK libspdk_event_ublk.so 00:02:24.500 SYMLINK libspdk_event_scsi.so 00:02:24.500 LIB libspdk_event_nvmf.a 00:02:24.500 SO libspdk_event_nvmf.so.6.0 00:02:24.500 SYMLINK libspdk_event_nvmf.so 00:02:24.758 CC module/event/subsystems/iscsi/iscsi.o 00:02:24.758 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:24.758 LIB libspdk_event_vhost_scsi.a 00:02:24.758 LIB libspdk_event_iscsi.a 00:02:24.758 SO libspdk_event_vhost_scsi.so.3.0 00:02:24.758 SO libspdk_event_iscsi.so.6.0 00:02:25.040 SYMLINK libspdk_event_vhost_scsi.so 00:02:25.040 SYMLINK libspdk_event_iscsi.so 00:02:25.040 SO libspdk.so.6.0 00:02:25.040 SYMLINK libspdk.so 00:02:25.303 CXX app/trace/trace.o 00:02:25.303 CC app/trace_record/trace_record.o 00:02:25.303 CC app/spdk_top/spdk_top.o 00:02:25.303 CC app/spdk_lspci/spdk_lspci.o 00:02:25.303 CC app/spdk_nvme_identify/identify.o 00:02:25.303 TEST_HEADER include/spdk/accel.h 00:02:25.303 TEST_HEADER include/spdk/accel_module.h 00:02:25.303 CC app/spdk_nvme_discover/discovery_aer.o 00:02:25.303 TEST_HEADER include/spdk/assert.h 00:02:25.303 CC test/rpc_client/rpc_client_test.o 00:02:25.303 CC app/spdk_nvme_perf/perf.o 00:02:25.303 TEST_HEADER include/spdk/barrier.h 00:02:25.303 TEST_HEADER include/spdk/base64.h 00:02:25.303 TEST_HEADER include/spdk/bdev.h 00:02:25.303 TEST_HEADER include/spdk/bdev_module.h 00:02:25.303 TEST_HEADER include/spdk/bdev_zone.h 00:02:25.303 TEST_HEADER include/spdk/bit_array.h 00:02:25.303 TEST_HEADER include/spdk/bit_pool.h 00:02:25.303 TEST_HEADER include/spdk/blob_bdev.h 00:02:25.303 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:25.303 TEST_HEADER include/spdk/blobfs.h 00:02:25.303 TEST_HEADER include/spdk/blob.h 00:02:25.303 TEST_HEADER include/spdk/conf.h 00:02:25.303 TEST_HEADER include/spdk/config.h 00:02:25.303 TEST_HEADER include/spdk/cpuset.h 00:02:25.303 TEST_HEADER include/spdk/crc16.h 00:02:25.303 TEST_HEADER include/spdk/crc32.h 00:02:25.303 TEST_HEADER include/spdk/dif.h 00:02:25.303 TEST_HEADER include/spdk/crc64.h 00:02:25.303 TEST_HEADER include/spdk/dma.h 00:02:25.303 TEST_HEADER include/spdk/endian.h 00:02:25.303 TEST_HEADER include/spdk/env_dpdk.h 00:02:25.303 TEST_HEADER include/spdk/env.h 00:02:25.303 TEST_HEADER include/spdk/event.h 00:02:25.303 TEST_HEADER include/spdk/fd_group.h 00:02:25.303 TEST_HEADER include/spdk/fd.h 00:02:25.303 TEST_HEADER include/spdk/file.h 00:02:25.303 TEST_HEADER include/spdk/ftl.h 00:02:25.303 TEST_HEADER include/spdk/gpt_spec.h 00:02:25.303 TEST_HEADER include/spdk/hexlify.h 00:02:25.303 TEST_HEADER include/spdk/histogram_data.h 00:02:25.303 TEST_HEADER include/spdk/idxd.h 00:02:25.303 TEST_HEADER include/spdk/idxd_spec.h 00:02:25.303 TEST_HEADER include/spdk/init.h 00:02:25.303 TEST_HEADER include/spdk/ioat.h 00:02:25.303 TEST_HEADER include/spdk/ioat_spec.h 00:02:25.303 TEST_HEADER include/spdk/iscsi_spec.h 00:02:25.303 TEST_HEADER include/spdk/json.h 00:02:25.303 TEST_HEADER include/spdk/jsonrpc.h 00:02:25.303 TEST_HEADER include/spdk/keyring.h 00:02:25.303 TEST_HEADER include/spdk/keyring_module.h 00:02:25.303 TEST_HEADER include/spdk/likely.h 00:02:25.303 TEST_HEADER include/spdk/log.h 00:02:25.303 TEST_HEADER include/spdk/memory.h 00:02:25.303 TEST_HEADER include/spdk/lvol.h 00:02:25.303 TEST_HEADER include/spdk/nbd.h 00:02:25.303 TEST_HEADER include/spdk/mmio.h 00:02:25.303 TEST_HEADER 
include/spdk/net.h 00:02:25.303 TEST_HEADER include/spdk/nvme.h 00:02:25.303 TEST_HEADER include/spdk/notify.h 00:02:25.304 TEST_HEADER include/spdk/nvme_intel.h 00:02:25.304 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:25.304 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:25.304 TEST_HEADER include/spdk/nvme_spec.h 00:02:25.304 TEST_HEADER include/spdk/nvme_zns.h 00:02:25.304 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:25.304 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:25.304 TEST_HEADER include/spdk/nvmf.h 00:02:25.304 TEST_HEADER include/spdk/nvmf_spec.h 00:02:25.304 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:25.304 TEST_HEADER include/spdk/nvmf_transport.h 00:02:25.304 TEST_HEADER include/spdk/opal.h 00:02:25.304 TEST_HEADER include/spdk/opal_spec.h 00:02:25.304 TEST_HEADER include/spdk/pci_ids.h 00:02:25.304 TEST_HEADER include/spdk/pipe.h 00:02:25.304 TEST_HEADER include/spdk/queue.h 00:02:25.304 TEST_HEADER include/spdk/reduce.h 00:02:25.304 TEST_HEADER include/spdk/scheduler.h 00:02:25.304 TEST_HEADER include/spdk/rpc.h 00:02:25.304 TEST_HEADER include/spdk/scsi.h 00:02:25.304 TEST_HEADER include/spdk/scsi_spec.h 00:02:25.304 TEST_HEADER include/spdk/sock.h 00:02:25.304 TEST_HEADER include/spdk/stdinc.h 00:02:25.304 TEST_HEADER include/spdk/string.h 00:02:25.304 TEST_HEADER include/spdk/thread.h 00:02:25.304 TEST_HEADER include/spdk/trace.h 00:02:25.304 TEST_HEADER include/spdk/trace_parser.h 00:02:25.304 TEST_HEADER include/spdk/tree.h 00:02:25.304 TEST_HEADER include/spdk/ublk.h 00:02:25.304 TEST_HEADER include/spdk/uuid.h 00:02:25.304 TEST_HEADER include/spdk/util.h 00:02:25.304 TEST_HEADER include/spdk/version.h 00:02:25.304 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:25.304 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:25.304 CC app/spdk_dd/spdk_dd.o 00:02:25.304 TEST_HEADER include/spdk/vhost.h 00:02:25.304 TEST_HEADER include/spdk/vmd.h 00:02:25.304 TEST_HEADER include/spdk/xor.h 00:02:25.304 TEST_HEADER include/spdk/zipf.h 00:02:25.304 CXX test/cpp_headers/accel.o 00:02:25.304 CXX test/cpp_headers/accel_module.o 00:02:25.304 CXX test/cpp_headers/assert.o 00:02:25.304 CXX test/cpp_headers/barrier.o 00:02:25.304 CXX test/cpp_headers/bdev.o 00:02:25.304 CXX test/cpp_headers/base64.o 00:02:25.304 CXX test/cpp_headers/bdev_module.o 00:02:25.304 CXX test/cpp_headers/bdev_zone.o 00:02:25.304 CXX test/cpp_headers/bit_array.o 00:02:25.304 CXX test/cpp_headers/bit_pool.o 00:02:25.304 CXX test/cpp_headers/blob_bdev.o 00:02:25.304 CXX test/cpp_headers/blobfs_bdev.o 00:02:25.304 CXX test/cpp_headers/blobfs.o 00:02:25.304 CXX test/cpp_headers/blob.o 00:02:25.304 CXX test/cpp_headers/conf.o 00:02:25.304 CXX test/cpp_headers/config.o 00:02:25.304 CXX test/cpp_headers/cpuset.o 00:02:25.304 CXX test/cpp_headers/crc16.o 00:02:25.304 CC app/iscsi_tgt/iscsi_tgt.o 00:02:25.304 CC app/nvmf_tgt/nvmf_main.o 00:02:25.304 CXX test/cpp_headers/crc32.o 00:02:25.304 CC app/spdk_tgt/spdk_tgt.o 00:02:25.304 CC examples/util/zipf/zipf.o 00:02:25.304 CC examples/ioat/verify/verify.o 00:02:25.304 CC test/thread/poller_perf/poller_perf.o 00:02:25.304 CC test/env/pci/pci_ut.o 00:02:25.304 CC examples/ioat/perf/perf.o 00:02:25.304 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:25.304 CC test/app/jsoncat/jsoncat.o 00:02:25.304 CC test/env/vtophys/vtophys.o 00:02:25.304 CC test/app/histogram_perf/histogram_perf.o 00:02:25.304 CC test/env/memory/memory_ut.o 00:02:25.304 CC app/fio/nvme/fio_plugin.o 00:02:25.304 CC test/app/stub/stub.o 00:02:25.571 CC test/dma/test_dma/test_dma.o 00:02:25.571 
CC test/app/bdev_svc/bdev_svc.o 00:02:25.571 CC app/fio/bdev/fio_plugin.o 00:02:25.571 LINK spdk_lspci 00:02:25.571 CC test/env/mem_callbacks/mem_callbacks.o 00:02:25.571 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:25.571 LINK spdk_nvme_discover 00:02:25.571 LINK rpc_client_test 00:02:25.571 LINK jsoncat 00:02:25.571 LINK zipf 00:02:25.571 LINK poller_perf 00:02:25.838 LINK interrupt_tgt 00:02:25.838 LINK vtophys 00:02:25.838 LINK histogram_perf 00:02:25.838 CXX test/cpp_headers/crc64.o 00:02:25.838 LINK nvmf_tgt 00:02:25.838 CXX test/cpp_headers/dif.o 00:02:25.838 LINK spdk_trace_record 00:02:25.838 CXX test/cpp_headers/dma.o 00:02:25.838 CXX test/cpp_headers/endian.o 00:02:25.838 CXX test/cpp_headers/env_dpdk.o 00:02:25.838 CXX test/cpp_headers/env.o 00:02:25.838 CXX test/cpp_headers/event.o 00:02:25.838 LINK env_dpdk_post_init 00:02:25.838 LINK iscsi_tgt 00:02:25.838 CXX test/cpp_headers/fd.o 00:02:25.838 CXX test/cpp_headers/fd_group.o 00:02:25.838 CXX test/cpp_headers/file.o 00:02:25.838 CXX test/cpp_headers/ftl.o 00:02:25.838 CXX test/cpp_headers/gpt_spec.o 00:02:25.838 CXX test/cpp_headers/hexlify.o 00:02:25.838 LINK spdk_tgt 00:02:25.838 CXX test/cpp_headers/histogram_data.o 00:02:25.838 LINK verify 00:02:25.838 LINK stub 00:02:25.838 LINK ioat_perf 00:02:25.838 CXX test/cpp_headers/idxd.o 00:02:25.838 LINK bdev_svc 00:02:25.838 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:25.838 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:25.838 CXX test/cpp_headers/idxd_spec.o 00:02:25.838 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:26.099 CXX test/cpp_headers/init.o 00:02:26.099 CXX test/cpp_headers/ioat.o 00:02:26.099 CXX test/cpp_headers/ioat_spec.o 00:02:26.099 CXX test/cpp_headers/iscsi_spec.o 00:02:26.099 LINK spdk_dd 00:02:26.099 LINK spdk_trace 00:02:26.099 CXX test/cpp_headers/json.o 00:02:26.099 CXX test/cpp_headers/jsonrpc.o 00:02:26.099 CXX test/cpp_headers/keyring.o 00:02:26.099 CXX test/cpp_headers/keyring_module.o 00:02:26.099 CXX test/cpp_headers/likely.o 00:02:26.099 CXX test/cpp_headers/log.o 00:02:26.099 CXX test/cpp_headers/lvol.o 00:02:26.099 CXX test/cpp_headers/memory.o 00:02:26.099 CXX test/cpp_headers/mmio.o 00:02:26.099 CXX test/cpp_headers/nbd.o 00:02:26.099 CXX test/cpp_headers/net.o 00:02:26.099 CXX test/cpp_headers/notify.o 00:02:26.099 LINK pci_ut 00:02:26.099 CXX test/cpp_headers/nvme.o 00:02:26.099 CXX test/cpp_headers/nvme_intel.o 00:02:26.099 CXX test/cpp_headers/nvme_ocssd.o 00:02:26.099 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:26.099 CXX test/cpp_headers/nvme_spec.o 00:02:26.099 CXX test/cpp_headers/nvme_zns.o 00:02:26.099 CXX test/cpp_headers/nvmf_cmd.o 00:02:26.367 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:26.367 LINK test_dma 00:02:26.367 CXX test/cpp_headers/nvmf.o 00:02:26.367 CXX test/cpp_headers/nvmf_spec.o 00:02:26.367 CXX test/cpp_headers/nvmf_transport.o 00:02:26.367 LINK nvme_fuzz 00:02:26.367 CXX test/cpp_headers/opal.o 00:02:26.367 CXX test/cpp_headers/opal_spec.o 00:02:26.367 CC test/event/event_perf/event_perf.o 00:02:26.367 CC test/event/reactor/reactor.o 00:02:26.367 CC examples/vmd/lsvmd/lsvmd.o 00:02:26.367 CC examples/sock/hello_world/hello_sock.o 00:02:26.367 CC examples/idxd/perf/perf.o 00:02:26.367 CC test/event/reactor_perf/reactor_perf.o 00:02:26.626 CC examples/thread/thread/thread_ex.o 00:02:26.626 LINK spdk_bdev 00:02:26.626 CXX test/cpp_headers/pci_ids.o 00:02:26.626 CXX test/cpp_headers/pipe.o 00:02:26.626 CC test/event/app_repeat/app_repeat.o 00:02:26.626 LINK spdk_nvme 00:02:26.626 CXX test/cpp_headers/queue.o 
00:02:26.626 CXX test/cpp_headers/reduce.o 00:02:26.626 CC examples/vmd/led/led.o 00:02:26.626 CXX test/cpp_headers/rpc.o 00:02:26.626 CXX test/cpp_headers/scheduler.o 00:02:26.626 CXX test/cpp_headers/scsi.o 00:02:26.626 CXX test/cpp_headers/scsi_spec.o 00:02:26.626 CXX test/cpp_headers/sock.o 00:02:26.626 CXX test/cpp_headers/stdinc.o 00:02:26.626 CXX test/cpp_headers/string.o 00:02:26.626 CXX test/cpp_headers/thread.o 00:02:26.626 CXX test/cpp_headers/trace.o 00:02:26.626 CXX test/cpp_headers/trace_parser.o 00:02:26.626 CC test/event/scheduler/scheduler.o 00:02:26.626 CXX test/cpp_headers/tree.o 00:02:26.626 CXX test/cpp_headers/ublk.o 00:02:26.626 CXX test/cpp_headers/util.o 00:02:26.626 CXX test/cpp_headers/uuid.o 00:02:26.626 CXX test/cpp_headers/version.o 00:02:26.626 CXX test/cpp_headers/vfio_user_pci.o 00:02:26.626 CXX test/cpp_headers/vfio_user_spec.o 00:02:26.626 CXX test/cpp_headers/vhost.o 00:02:26.626 CC app/vhost/vhost.o 00:02:26.626 LINK reactor 00:02:26.626 LINK vhost_fuzz 00:02:26.626 CXX test/cpp_headers/vmd.o 00:02:26.626 CXX test/cpp_headers/xor.o 00:02:26.626 CXX test/cpp_headers/zipf.o 00:02:26.626 LINK lsvmd 00:02:26.890 LINK event_perf 00:02:26.890 LINK reactor_perf 00:02:26.890 LINK mem_callbacks 00:02:26.890 LINK spdk_nvme_perf 00:02:26.890 LINK app_repeat 00:02:26.890 LINK led 00:02:26.890 LINK spdk_top 00:02:26.890 LINK spdk_nvme_identify 00:02:26.890 LINK hello_sock 00:02:27.150 LINK thread 00:02:27.150 CC test/nvme/reset/reset.o 00:02:27.150 CC test/nvme/sgl/sgl.o 00:02:27.150 CC test/nvme/overhead/overhead.o 00:02:27.150 CC test/nvme/aer/aer.o 00:02:27.150 CC test/nvme/e2edp/nvme_dp.o 00:02:27.150 CC test/nvme/err_injection/err_injection.o 00:02:27.150 CC test/nvme/startup/startup.o 00:02:27.150 CC test/blobfs/mkfs/mkfs.o 00:02:27.150 CC test/nvme/reserve/reserve.o 00:02:27.150 CC test/accel/dif/dif.o 00:02:27.150 LINK scheduler 00:02:27.150 CC test/nvme/simple_copy/simple_copy.o 00:02:27.150 LINK vhost 00:02:27.150 CC test/nvme/compliance/nvme_compliance.o 00:02:27.150 LINK idxd_perf 00:02:27.150 CC test/nvme/connect_stress/connect_stress.o 00:02:27.150 CC test/nvme/boot_partition/boot_partition.o 00:02:27.150 CC test/lvol/esnap/esnap.o 00:02:27.150 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:27.150 CC test/nvme/fdp/fdp.o 00:02:27.150 CC test/nvme/fused_ordering/fused_ordering.o 00:02:27.150 CC test/nvme/cuse/cuse.o 00:02:27.408 LINK mkfs 00:02:27.408 LINK startup 00:02:27.408 LINK boot_partition 00:02:27.408 LINK reset 00:02:27.408 LINK sgl 00:02:27.408 LINK nvme_dp 00:02:27.408 LINK connect_stress 00:02:27.408 LINK fused_ordering 00:02:27.408 LINK err_injection 00:02:27.408 LINK aer 00:02:27.408 LINK simple_copy 00:02:27.408 LINK doorbell_aers 00:02:27.408 LINK reserve 00:02:27.408 LINK memory_ut 00:02:27.408 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:27.408 CC examples/nvme/abort/abort.o 00:02:27.408 CC examples/nvme/reconnect/reconnect.o 00:02:27.408 CC examples/nvme/hello_world/hello_world.o 00:02:27.408 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:27.408 CC examples/nvme/arbitration/arbitration.o 00:02:27.408 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:27.408 CC examples/nvme/hotplug/hotplug.o 00:02:27.408 LINK nvme_compliance 00:02:27.666 LINK overhead 00:02:27.666 CC examples/accel/perf/accel_perf.o 00:02:27.666 CC examples/blob/cli/blobcli.o 00:02:27.666 LINK fdp 00:02:27.666 CC examples/blob/hello_world/hello_blob.o 00:02:27.666 LINK dif 00:02:27.666 LINK cmb_copy 00:02:27.924 LINK pmr_persistence 00:02:27.924 LINK 
hello_world 00:02:27.924 LINK hotplug 00:02:27.924 LINK arbitration 00:02:27.924 LINK hello_blob 00:02:27.924 LINK abort 00:02:27.924 LINK reconnect 00:02:28.182 CC test/bdev/bdevio/bdevio.o 00:02:28.182 LINK accel_perf 00:02:28.182 LINK blobcli 00:02:28.182 LINK nvme_manage 00:02:28.440 CC examples/bdev/hello_world/hello_bdev.o 00:02:28.440 LINK iscsi_fuzz 00:02:28.440 CC examples/bdev/bdevperf/bdevperf.o 00:02:28.440 LINK bdevio 00:02:28.698 LINK cuse 00:02:28.698 LINK hello_bdev 00:02:29.264 LINK bdevperf 00:02:29.521 CC examples/nvmf/nvmf/nvmf.o 00:02:29.779 LINK nvmf 00:02:32.307 LINK esnap 00:02:32.574 00:02:32.574 real 0m49.501s 00:02:32.574 user 10m12.598s 00:02:32.574 sys 2m29.768s 00:02:32.574 23:29:07 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:32.574 23:29:07 make -- common/autotest_common.sh@10 -- $ set +x 00:02:32.574 ************************************ 00:02:32.574 END TEST make 00:02:32.574 ************************************ 00:02:32.574 23:29:07 -- common/autotest_common.sh@1142 -- $ return 0 00:02:32.574 23:29:07 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:32.574 23:29:07 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:32.574 23:29:07 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:32.574 23:29:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.574 23:29:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:32.574 23:29:07 -- pm/common@44 -- $ pid=3574500 00:02:32.574 23:29:07 -- pm/common@50 -- $ kill -TERM 3574500 00:02:32.574 23:29:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.574 23:29:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:32.574 23:29:07 -- pm/common@44 -- $ pid=3574502 00:02:32.574 23:29:07 -- pm/common@50 -- $ kill -TERM 3574502 00:02:32.574 23:29:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.574 23:29:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:32.574 23:29:07 -- pm/common@44 -- $ pid=3574504 00:02:32.574 23:29:07 -- pm/common@50 -- $ kill -TERM 3574504 00:02:32.574 23:29:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.574 23:29:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:32.574 23:29:07 -- pm/common@44 -- $ pid=3574531 00:02:32.574 23:29:07 -- pm/common@50 -- $ sudo -E kill -TERM 3574531 00:02:32.574 23:29:07 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:32.574 23:29:07 -- nvmf/common.sh@7 -- # uname -s 00:02:32.574 23:29:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:32.574 23:29:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:32.574 23:29:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:32.574 23:29:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:32.574 23:29:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:32.574 23:29:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:32.574 23:29:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:32.574 23:29:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:32.574 23:29:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:32.574 23:29:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:32.574 23:29:07 -- 
nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:02:32.574 23:29:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:02:32.574 23:29:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:32.574 23:29:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:32.574 23:29:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:32.574 23:29:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:32.574 23:29:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:32.574 23:29:07 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:32.575 23:29:07 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:32.575 23:29:07 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:32.575 23:29:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.575 23:29:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.575 23:29:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.575 23:29:07 -- paths/export.sh@5 -- # export PATH 00:02:32.575 23:29:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.575 23:29:07 -- nvmf/common.sh@47 -- # : 0 00:02:32.575 23:29:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:32.575 23:29:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:32.575 23:29:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:32.575 23:29:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:32.575 23:29:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:32.575 23:29:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:32.575 23:29:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:32.575 23:29:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:32.575 23:29:07 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:32.575 23:29:07 -- spdk/autotest.sh@32 -- # uname -s 00:02:32.575 23:29:07 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:32.575 23:29:07 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:32.575 23:29:07 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:32.575 23:29:07 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:32.575 23:29:07 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:32.575 23:29:07 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:32.575 23:29:07 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:32.575 23:29:07 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:32.575 23:29:07 -- spdk/autotest.sh@48 -- # udevadm_pid=3630013 00:02:32.575 23:29:07 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:32.575 23:29:07 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:32.575 23:29:07 -- pm/common@17 -- # local monitor 00:02:32.575 23:29:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.575 23:29:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.575 23:29:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.575 23:29:07 -- pm/common@21 -- # date +%s 00:02:32.575 23:29:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.575 23:29:07 -- pm/common@21 -- # date +%s 00:02:32.575 23:29:07 -- pm/common@25 -- # sleep 1 00:02:32.575 23:29:07 -- pm/common@21 -- # date +%s 00:02:32.575 23:29:07 -- pm/common@21 -- # date +%s 00:02:32.575 23:29:07 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721078947 00:02:32.575 23:29:07 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721078947 00:02:32.575 23:29:07 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721078947 00:02:32.575 23:29:07 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721078947 00:02:32.575 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721078947_collect-vmstat.pm.log 00:02:32.575 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721078947_collect-cpu-load.pm.log 00:02:32.575 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721078947_collect-cpu-temp.pm.log 00:02:32.575 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721078947_collect-bmc-pm.bmc.pm.log 00:02:33.951 23:29:08 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:33.951 23:29:08 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:33.951 23:29:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:33.951 23:29:08 -- common/autotest_common.sh@10 -- # set +x 00:02:33.951 23:29:08 -- spdk/autotest.sh@59 -- # create_test_list 00:02:33.951 23:29:08 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:33.951 23:29:08 -- common/autotest_common.sh@10 -- # set +x 00:02:33.951 23:29:08 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:33.951 23:29:08 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:33.951 23:29:08 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
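A note on the four "Redirecting to ... pm.log" lines above: these are the resource monitors (collect-cpu-load, collect-vmstat, collect-cpu-temp, collect-bmc-pm) being launched in the background against the shared power/ output directory; at the end of the run, stop_monitor_resources / signal_monitor_resources TERM (visible near the top of this section) reads the matching <name>.pid files back and signals each collector. A minimal bash sketch of that pidfile contract follows; the -d/-l/-p flags, the log-suffix style, and the pidfile layout come from the trace, while the stand-in collector loop and everything else is an assumption, not the real scripts/perf/pm code.

#!/usr/bin/env bash
# Sketch of the pm monitor start/stop pattern traced in this log (assumptions noted above).
OUTDIR=$(mktemp -d)   # stands in for .../spdk/../output/power

start_monitor() {
    local name=$1 suffix
    suffix=monitor.autotest.sh.$(date +%s)          # same -p suffix style as the trace
    # Stand-in for: scripts/perf/pm/$name -d "$OUTDIR" -l -p "$suffix"
    ( while :; do date >> "$OUTDIR/${suffix}_${name}.pm.log"; sleep 1; done ) &
    echo $! > "$OUTDIR/$name.pid"                   # pidfile consumed at teardown
}

stop_monitors() {
    local pidfile
    for pidfile in "$OUTDIR"/*.pid; do
        [[ -e $pidfile ]] && kill -TERM "$(<"$pidfile")" 2>/dev/null
    done
}

start_monitor collect-cpu-load
start_monitor collect-vmstat
sleep 3
stop_monitors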
00:02:33.951 23:29:08 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:33.951 23:29:08 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:33.951 23:29:08 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:33.951 23:29:08 -- common/autotest_common.sh@1455 -- # uname 00:02:33.951 23:29:08 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:33.951 23:29:08 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:33.951 23:29:08 -- common/autotest_common.sh@1475 -- # uname 00:02:33.951 23:29:08 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:33.951 23:29:08 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:33.951 23:29:08 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:33.951 23:29:08 -- spdk/autotest.sh@72 -- # hash lcov 00:02:33.951 23:29:08 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:33.951 23:29:08 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:33.951 --rc lcov_branch_coverage=1 00:02:33.951 --rc lcov_function_coverage=1 00:02:33.951 --rc genhtml_branch_coverage=1 00:02:33.951 --rc genhtml_function_coverage=1 00:02:33.951 --rc genhtml_legend=1 00:02:33.951 --rc geninfo_all_blocks=1 00:02:33.951 ' 00:02:33.951 23:29:08 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:33.951 --rc lcov_branch_coverage=1 00:02:33.951 --rc lcov_function_coverage=1 00:02:33.951 --rc genhtml_branch_coverage=1 00:02:33.951 --rc genhtml_function_coverage=1 00:02:33.951 --rc genhtml_legend=1 00:02:33.951 --rc geninfo_all_blocks=1 00:02:33.951 ' 00:02:33.951 23:29:08 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:33.951 --rc lcov_branch_coverage=1 00:02:33.951 --rc lcov_function_coverage=1 00:02:33.951 --rc genhtml_branch_coverage=1 00:02:33.951 --rc genhtml_function_coverage=1 00:02:33.951 --rc genhtml_legend=1 00:02:33.951 --rc geninfo_all_blocks=1 00:02:33.951 --no-external' 00:02:33.951 23:29:08 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:33.951 --rc lcov_branch_coverage=1 00:02:33.951 --rc lcov_function_coverage=1 00:02:33.951 --rc genhtml_branch_coverage=1 00:02:33.951 --rc genhtml_function_coverage=1 00:02:33.951 --rc genhtml_legend=1 00:02:33.951 --rc geninfo_all_blocks=1 00:02:33.951 --no-external' 00:02:33.951 23:29:08 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:33.951 lcov: LCOV version 1.14 00:02:33.951 23:29:08 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:48.831 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:48.831 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:03.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:03.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:03.698 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:03.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:03.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:03.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:03.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:03.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:03.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:03.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:03.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:03.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:03.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:03.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:03.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:03.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:03.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:03.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:03.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:03.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:03.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:03.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:03.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:03.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:03.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:03.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:03.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:03.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:03.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:03.698 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:03.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:03.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:03.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:03.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:03.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:03.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:03.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:03.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:03.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:03.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:03.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:03.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:03.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:03.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:03.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:03.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:03.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:03.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:03.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:03.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:03.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:03.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:03.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:03.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:03.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:03.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:03.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:03.698 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:03.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:03.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:03.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:03.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:03.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:03.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:03.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:03.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:03.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:03.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:03.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:03.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:03.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:03.699 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:03.699 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:03.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:03.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:03.700 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:03.700 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:03.700 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:03.700 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:03.700 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:03.700 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:03.700 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:03.700 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:03.700 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:03.700 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:03.700 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:07.008 23:29:42 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:07.008 23:29:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:07.008 23:29:42 -- common/autotest_common.sh@10 -- # set +x 00:03:07.008 23:29:42 -- spdk/autotest.sh@91 -- # rm -f 00:03:07.008 23:29:42 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:08.383 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:08.383 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:08.383 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:08.383 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:08.383 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:08.383 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:08.383 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:08.383 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:08.383 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:03:08.383 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:08.383 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:08.383 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:08.383 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:08.383 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:08.383 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:08.383 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:08.383 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:08.642 23:29:43 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:08.642 23:29:43 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:08.643 23:29:43 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:08.643 23:29:43 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:08.643 23:29:43 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:08.643 23:29:43 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:08.643 23:29:43 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 
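# Annotation (not part of the captured log): the xtrace around this point is
# get_zoned_devs / is_block_zoned from autotest_common.sh. A block device
# counts as zoned when /sys/block/<dev>/queue/zoned exists and holds a value
# other than "none". A minimal standalone sketch of the same check -- the
# sysfs path and the != none comparison match the trace; the associative
# array and loop shape are assumptions:
declare -A zoned_devs=()
for nvme in /sys/block/nvme*; do
    [[ -e $nvme/queue/zoned ]] || continue             # attribute missing: not zoned
    [[ $(<"$nvme/queue/zoned") == none ]] && continue  # "none" means a conventional device
    zoned_devs[${nvme##*/}]=1                          # e.g. zoned_devs[nvme0n1]=1
done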
00:03:08.643 23:29:43 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:08.643 23:29:43 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:08.643 23:29:43 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:08.643 23:29:43 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:08.643 23:29:43 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:08.643 23:29:43 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:08.643 23:29:43 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:08.643 23:29:43 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:08.643 No valid GPT data, bailing 00:03:08.643 23:29:43 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:08.643 23:29:43 -- scripts/common.sh@391 -- # pt= 00:03:08.643 23:29:43 -- scripts/common.sh@392 -- # return 1 00:03:08.643 23:29:43 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:08.643 1+0 records in 00:03:08.643 1+0 records out 00:03:08.643 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00212348 s, 494 MB/s 00:03:08.643 23:29:43 -- spdk/autotest.sh@118 -- # sync 00:03:08.643 23:29:43 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:08.643 23:29:43 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:08.643 23:29:43 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:10.546 23:29:45 -- spdk/autotest.sh@124 -- # uname -s 00:03:10.546 23:29:45 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:10.546 23:29:45 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:10.546 23:29:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:10.546 23:29:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:10.546 23:29:45 -- common/autotest_common.sh@10 -- # set +x 00:03:10.546 ************************************ 00:03:10.546 START TEST setup.sh 00:03:10.546 ************************************ 00:03:10.546 23:29:45 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:10.546 * Looking for test storage... 00:03:10.547 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:10.547 23:29:45 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:10.547 23:29:45 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:10.547 23:29:45 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:10.547 23:29:45 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:10.547 23:29:45 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:10.547 23:29:45 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:10.805 ************************************ 00:03:10.805 START TEST acl 00:03:10.805 ************************************ 00:03:10.805 23:29:45 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:10.805 * Looking for test storage... 
00:03:10.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:10.805 23:29:45 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:10.805 23:29:45 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:10.805 23:29:45 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:10.805 23:29:45 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:10.805 23:29:45 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:10.805 23:29:45 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:10.805 23:29:45 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:10.805 23:29:45 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:10.805 23:29:45 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:10.805 23:29:45 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:10.805 23:29:45 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:10.805 23:29:45 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:10.805 23:29:45 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:10.805 23:29:45 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:10.805 23:29:45 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:10.805 23:29:45 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:12.176 23:29:47 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:12.176 23:29:47 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:12.176 23:29:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.176 23:29:47 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:12.176 23:29:47 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:12.176 23:29:47 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:13.551 Hugepages 00:03:13.552 node hugesize free / total 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.552 00:03:13.552 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:0b:00.0 == *:*:*.* ]] 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\b\:\0\0\.\0* ]] 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:13.552 23:29:48 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:13.552 23:29:48 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:13.552 23:29:48 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:13.552 23:29:48 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:13.552 ************************************ 00:03:13.552 START TEST denied 00:03:13.552 ************************************ 00:03:13.552 23:29:48 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:13.552 23:29:48 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:0b:00.0' 00:03:13.552 23:29:48 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:13.552 23:29:48 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:0b:00.0' 00:03:13.552 23:29:48 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:13.552 23:29:48 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:15.451 0000:0b:00.0 (8086 0a54): Skipping denied controller at 0000:0b:00.0 00:03:15.451 23:29:50 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:0b:00.0 00:03:15.451 23:29:50 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:15.451 23:29:50 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:15.451 23:29:50 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:0b:00.0 ]] 00:03:15.451 23:29:50 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:0b:00.0/driver 00:03:15.451 23:29:50 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:15.451 23:29:50 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:15.451 23:29:50 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:15.451 23:29:50 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:15.451 23:29:50 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:17.999 00:03:17.999 real 0m4.065s 00:03:17.999 user 0m1.151s 00:03:17.999 sys 0m1.929s 00:03:17.999 23:29:52 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:17.999 23:29:52 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:17.999 ************************************ 00:03:17.999 END TEST denied 00:03:17.999 ************************************ 00:03:17.999 23:29:52 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:17.999 23:29:52 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:17.999 23:29:52 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:17.999 23:29:52 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:17.999 23:29:52 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:17.999 ************************************ 00:03:17.999 START TEST allowed 00:03:17.999 ************************************ 00:03:17.999 23:29:52 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:17.999 23:29:52 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:0b:00.0 00:03:17.999 23:29:52 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:17.999 23:29:52 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:0b:00.0 .*: nvme -> .*' 00:03:17.999 23:29:52 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:17.999 23:29:52 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:20.527 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:03:20.527 23:29:55 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:20.527 23:29:55 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:20.527 23:29:55 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:20.527 23:29:55 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:20.527 23:29:55 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:21.903 00:03:21.903 real 0m3.984s 00:03:21.903 user 0m1.070s 00:03:21.903 sys 0m1.808s 00:03:21.903 23:29:56 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:21.904 23:29:56 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:21.904 ************************************ 00:03:21.904 END TEST allowed 00:03:21.904 ************************************ 00:03:21.904 23:29:56 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:21.904 00:03:21.904 real 0m11.063s 00:03:21.904 user 0m3.396s 00:03:21.904 sys 0m5.643s 00:03:21.904 23:29:56 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:21.904 23:29:56 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:21.904 ************************************ 00:03:21.904 END TEST acl 00:03:21.904 ************************************ 00:03:21.904 23:29:56 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:21.904 23:29:56 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:21.904 23:29:56 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:21.904 23:29:56 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:21.904 23:29:56 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:21.904 ************************************ 00:03:21.904 START TEST hugepages 00:03:21.904 ************************************ 00:03:21.904 23:29:56 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:21.904 * Looking for test storage... 00:03:21.904 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 39781600 kB' 'MemAvailable: 43327836 kB' 'Buffers: 2704 kB' 'Cached: 14276896 kB' 'SwapCached: 0 kB' 'Active: 11228528 kB' 'Inactive: 3510464 kB' 'Active(anon): 10792112 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510464 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 462740 kB' 'Mapped: 196392 kB' 'Shmem: 10332720 kB' 'KReclaimable: 186388 kB' 'Slab: 540468 kB' 'SReclaimable: 186388 kB' 'SUnreclaim: 354080 kB' 'KernelStack: 12896 kB' 'PageTables: 8164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562312 kB' 'Committed_AS: 11937856 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196176 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1611356 kB' 'DirectMap2M: 20328448 kB' 'DirectMap1G: 47185920 kB' 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.904 23:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': '
[ xtrace elided: the same "[[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]] / continue / IFS=': ' / read -r var val _" sequence repeats for each remaining /proc/meminfo field, Active(file) through HardwareCorrupted in this capture ]
setup/common.sh@31 -- # read -r var val _ 00:03:21.905 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.905 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.905 23:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.905 23:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.905 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.905 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.905 23:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.905 23:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.905 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.905 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.905 23:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.905 23:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.905 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.905 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.905 23:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.905 23:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.905 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.905 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.905 23:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.905 23:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.905 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.905 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.905 23:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.905 23:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.905 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.905 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.905 23:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.905 23:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.905 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.905 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.905 23:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.905 23:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.905 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.905 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.905 23:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.905 23:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:21.905 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:21.905 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:21.905 23:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:21.905 23:29:56 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _
00:03:21.905 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:21.906 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:21.906 23:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:21.906 23:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:21.906 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:21.906 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:21.906 23:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:21.906 23:29:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:21.906 23:29:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:21.906 23:29:56 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:03:21.906 23:29:56 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:03:21.906 23:29:56 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:03:21.906 23:29:56 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:21.906 23:29:56 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:21.906 23:29:56 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:03:21.906 23:29:56 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:03:21.906 23:29:56 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:03:21.906 23:29:56 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:03:21.906 23:29:56 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:03:21.906 23:29:56 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:03:21.906 23:29:56 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:21.906 23:29:56 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:03:21.906 23:29:56 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:21.906 23:29:56 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:21.906 23:29:56 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:21.906 23:29:56 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:21.906 23:29:56 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:03:21.906 23:29:56 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:03:21.906 23:29:56 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:21.906 23:29:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:21.906 23:29:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:21.906 23:29:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:21.906 23:29:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:21.906 23:29:56 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:21.906 23:29:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:21.906 23:29:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
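The clear_hp trace above walks both NUMA nodes and zeroes every hugepage pool before the test begins. A minimal standalone sketch of that step, reconstructed from the trace rather than copied from setup/hugepages.sh, assuming the standard sysfs hugepage layout and root privileges:

    clear_hp() {
        local node hp
        # visit every NUMA node the kernel exposes
        for node in /sys/devices/system/node/node[0-9]*; do
            # ... and every hugepage size directory under it (2048kB, 1048576kB, ...)
            for hp in "$node"/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"    # release that pool entirely
            done
        done
        export CLEAR_HUGE=yes    # flag picked up by later setup.sh invocations
    }

The two echo 0 writes per node in the trace correspond to the 2 MB and 1 GB pools present on this machine.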
00:03:21.906 23:29:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:21.906 23:29:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:21.906 23:29:56 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:21.906 23:29:56 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:21.906 23:29:56 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:21.906 23:29:56 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:21.906 23:29:56 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:21.906 23:29:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:21.906 ************************************
00:03:21.906 START TEST default_setup
00:03:21.906 ************************************
00:03:21.906 23:29:56 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup
00:03:21.906 23:29:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:21.906 23:29:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:03:21.906 23:29:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:21.906 23:29:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:03:21.906 23:29:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:21.906 23:29:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:03:21.906 23:29:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:21.906 23:29:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:21.906 23:29:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:21.906 23:29:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:21.906 23:29:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:03:21.906 23:29:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:21.906 23:29:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:21.906 23:29:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:21.906 23:29:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:21.906 23:29:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:21.906 23:29:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:21.906 23:29:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:21.906 23:29:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:03:21.906 23:29:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:03:21.906 23:29:56 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:03:21.906 23:29:56 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:23.283 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:23.283 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:23.283 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:23.283 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:23.283 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:23.283 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:23.283 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:23.283 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:23.283 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:23.283 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:23.283 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:23.283 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:23.283 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:23.283 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:23.283 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:23.284 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:24.221 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci
00:03:24.485 23:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:24.485 23:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:03:24.485 23:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:03:24.485 23:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:03:24.485 23:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:03:24.485 23:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:03:24.485 23:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:03:24.485 23:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:24.485 23:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:24.485 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:24.485 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:24.485 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:24.485 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:24.485 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:24.485 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:24.485 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:24.485 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:24.485 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:24.485 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:24.485 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:24.485 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41897584 kB' 'MemAvailable: 45443820 kB' 'Buffers: 2704 kB' 'Cached: 14276988 kB' 'SwapCached: 0 kB' 'Active: 11249420 kB' 'Inactive: 3510464 kB' 'Active(anon): 10813004 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510464 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483572 kB' 'Mapped: 196504 kB' 'Shmem: 10332812 kB' 'KReclaimable: 186388 kB' 'Slab: 539960 kB' 'SReclaimable: 186388 kB' 'SUnreclaim: 353572 
kB' 'KernelStack: 12832 kB' 'PageTables: 7824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 11927332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196192 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1611356 kB' 'DirectMap2M: 20328448 kB' 'DirectMap1G: 47185920 kB' 00:03:24.485 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.485 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.485 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.485 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.485 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.485 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.485 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.485 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.485 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.485 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
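Each long run of [[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] ... continue above is a single get_meminfo call scanning a /proc/meminfo snapshot key by key until it reaches the requested field. A simplified sketch of the pattern; the traced helper uses mapfile plus a node argument for per-node meminfo, so this stripped-down version is an assumption, not the script's exact code:

    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        # with a node id, read that node's counters instead of the global file
        [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS=': ' read -r var val _; do
            # keys that don't match fall through to the next line
            # (the "continue" entries in the trace)
            if [[ $var == "$get" ]]; then
                echo "$val"    # e.g. 0 for AnonHugePages in the snapshot above
                return 0
            fi
        done < "$mem_f"
        return 1
    }

Run against the snapshot above, get_meminfo AnonHugePages prints 0, which is the echo 0 / return 0 pair visible at setup/common.sh@33 further down.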
00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.486 23:29:59 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.486 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 23:29:59 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- 
setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41897540 kB' 'MemAvailable: 45443776 kB' 'Buffers: 2704 kB' 'Cached: 14276988 kB' 'SwapCached: 0 kB' 'Active: 11251740 kB' 'Inactive: 3510464 kB' 'Active(anon): 10815324 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510464 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485744 kB' 'Mapped: 196120 kB' 'Shmem: 10332812 kB' 'KReclaimable: 186388 kB' 'Slab: 539936 kB' 'SReclaimable: 186388 kB' 'SUnreclaim: 353548 kB' 'KernelStack: 12784 kB' 'PageTables: 7796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 11930400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196148 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1611356 kB' 'DirectMap2M: 20328448 kB' 'DirectMap1G: 47185920 kB' 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
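For orientation while the HugePages_Surp scan repeats: the test banner earlier requested get_test_nr_hugepages 2097152 0, i.e. 2 GiB expressed in kB, and the detected default_hugepages=2048 kB page size turns that into nr_hugepages=1024. The snapshots agree: HugePages_Total: 1024 backing Hugetlb: 2097152 kB, down from the pre-test Hugetlb: 4194304 kB (2048 pages). The arithmetic, as a hypothetical one-off check:

    size_kb=2097152 page_kb=2048
    echo $(( size_kb / page_kb ))    # 1024     -> expected nr_hugepages
    echo $(( 1024 * page_kb ))       # 2097152  -> kB of Hugetlb in the snapshots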
00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.487 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.488 23:29:59 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:24.488 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[trace condensed: the same IFS=': ' / read -r var val _ / compare / continue cycle repeats for NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free and HugePages_Rsvd]
00:03:24.489 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:24.489 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:24.489 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:24.489 23:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
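The block above is bash xtrace output from SPDK's get_meminfo helper in test/setup/common.sh: it snapshots a meminfo file into an array and scans it field by field (the repeated IFS=': ' / read -r var val _ / compare / continue cycle) until the requested field, here HugePages_Surp, matches, then echoes its value. A minimal sketch of that logic, reconstructed from the common.sh@17-@33 references in the trace (an illustration of what the trace is executing, not the verbatim SPDK source):

shopt -s extglob  # required for the +([0-9]) extended glob used below

get_meminfo() {
	local get=$1 node=$2
	local var val
	local mem_f mem

	mem_f=/proc/meminfo
	# With a node argument, read that NUMA node's own meminfo instead.
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem < "$mem_f"
	# Per-node files prefix every line with "Node N "; strip that prefix.
	mem=("${mem[@]#Node +([0-9]) }")

	local line
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] || continue  # not the requested field, keep scanning
		echo "$val"                       # e.g. 0 for HugePages_Surp in this run
		return 0
	done
	return 1
}

Here get_meminfo HugePages_Surp prints 0, which hugepages.sh stores as surp=0 before looking up HugePages_Rsvd the same way.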
00:03:24.489 23:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:24.489 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:24.489 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:24.489 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:24.489 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:24.489 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:24.489 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:24.489 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:24.489 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:24.489 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:24.489 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:24.489 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:24.489 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41898532 kB' 'MemAvailable: 45444768 kB' 'Buffers: 2704 kB' 'Cached: 14277008 kB' 'SwapCached: 0 kB' 'Active: 11251628 kB' 'Inactive: 3510464 kB' 'Active(anon): 10815212 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510464 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485576 kB' 'Mapped: 196372 kB' 'Shmem: 10332832 kB' 'KReclaimable: 186388 kB' 'Slab: 539944 kB' 'SReclaimable: 186388 kB' 'SUnreclaim: 353556 kB' 'KernelStack: 12864 kB' 'PageTables: 8008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 11930420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196148 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1611356 kB' 'DirectMap2M: 20328448 kB' 'DirectMap1G: 47185920 kB'
00:03:24.489 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:24.489 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[trace condensed: the scan walks every field of the snapshot above, MemFree through HugePages_Free, with the same read/compare/continue cycle before reaching HugePages_Rsvd]
00:03:24.491 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:24.491 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:24.491 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:24.491 23:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:24.491 23:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:24.491 nr_hugepages=1024
00:03:24.491 23:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:24.491 resv_hugepages=0
00:03:24.491 23:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:24.491 surplus_hugepages=0
00:03:24.491 23:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:24.491 anon_hugepages=0
00:03:24.491 23:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:24.491 23:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
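With surp and resv both 0, setup/hugepages.sh@102-@109 prints the pool summary and asserts the arithmetic that ties it together: the allocated total must equal the requested nr_hugepages plus any surplus and reserved pages. The same check written standalone, reusing the get_meminfo sketch above (variable names follow the trace; a sketch, not the script's exact code):

# Hugepage-pool consistency check mirrored from the trace.
nr_hugepages=1024
surp=$(get_meminfo HugePages_Surp)    # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
total=$(get_meminfo HugePages_Total)  # 1024 in this run

(( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2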
00:03:24.491 23:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:24.491 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:24.491 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:24.491 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:24.491 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:24.491 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:24.491 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:24.491 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:24.491 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:24.491 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:24.491 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:24.491 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:24.491 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41898532 kB' 'MemAvailable: 45444768 kB' 'Buffers: 2704 kB' 'Cached: 14277032 kB' 'SwapCached: 0 kB' 'Active: 11245932 kB' 'Inactive: 3510464 kB' 'Active(anon): 10809516 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510464 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 479888 kB' 'Mapped: 195500 kB' 'Shmem: 10332856 kB' 'KReclaimable: 186388 kB' 'Slab: 539944 kB' 'SReclaimable: 186388 kB' 'SUnreclaim: 353556 kB' 'KernelStack: 12848 kB' 'PageTables: 7964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 11924324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1611356 kB' 'DirectMap2M: 20328448 kB' 'DirectMap1G: 47185920 kB'
00:03:24.492 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:24.492 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[trace condensed: the scan walks every field of the snapshot above, MemFree through Unaccepted, with the same read/compare/continue cycle before reaching HugePages_Total]
00:03:24.493 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:24.493 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:24.493 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:24.493 23:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:24.493 23:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:24.493 23:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:24.493 23:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:24.493 23:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:24.493 23:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:24.493 23:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:24.493 23:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:24.493 23:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:24.493 23:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:24.493 23:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
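get_nodes (setup/hugepages.sh@27-@33) walks the NUMA node directories and records each node's hugepage count; on this machine node0 holds all 1024 pages and node1 none, so no_nodes=2. A sketch of that enumeration follows. The trace shows only the resulting assignments, not where the counts come from, so fetching them through the get_meminfo sketch above is an assumption:

shopt -s extglob
declare -A nodes_sys

get_nodes() {
	local node
	for node in /sys/devices/system/node/node+([0-9]); do
		# Assumed source of the per-node count; the trace shows only the
		# results nodes_sys[0]=1024 and nodes_sys[1]=0.
		nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
	done
	no_nodes=${#nodes_sys[@]}  # 2 on this machine
	(( no_nodes > 0 ))
}

The per-node loop at @115-@117 then verifies each node's share, starting with a HugePages_Surp lookup against node0.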
00:03:24.493 23:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:24.493 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:24.493 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:03:24.493 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:24.493 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:24.493 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:24.493 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:24.493 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:24.493 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:24.493 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:24.493 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:24.493 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:24.493 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 26027116 kB' 'MemUsed: 6849824 kB' 'SwapCached: 0 kB' 'Active: 3733868 kB' 'Inactive: 201288 kB' 'Active(anon): 3560736 kB' 'Inactive(anon): 0 kB' 'Active(file): 173132 kB' 'Inactive(file): 201288 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3858044 kB' 'Mapped: 94064 kB' 'AnonPages: 80212 kB' 'Shmem: 3483624 kB' 'KernelStack: 6296 kB' 'PageTables: 2704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 57700 kB' 'Slab: 239832 kB' 'SReclaimable: 57700 kB' 'SUnreclaim: 182132 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:24.493 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:24.493 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[trace condensed: the read/compare/continue scan continues over the remaining node0 fields, MemFree through Unaccepted, toward HugePages_Surp]
setup/common.sh@31 -- # read -r var val _ 00:03:24.495 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.495 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.495 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.495 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.495 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.495 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:24.495 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:24.495 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:24.495 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.495 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:24.495 23:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:24.495 23:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:24.495 23:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:24.495 23:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:24.495 23:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:24.495 23:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:24.495 node0=1024 expecting 1024 00:03:24.495 23:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:24.495 00:03:24.495 real 0m2.608s 00:03:24.495 user 0m0.701s 00:03:24.495 sys 0m1.018s 00:03:24.495 23:29:59 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:24.495 23:29:59 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:24.495 ************************************ 00:03:24.495 END TEST default_setup 00:03:24.495 ************************************ 00:03:24.495 23:29:59 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:24.495 23:29:59 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:24.495 23:29:59 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:24.495 23:29:59 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:24.495 23:29:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:24.495 ************************************ 00:03:24.495 START TEST per_node_1G_alloc 00:03:24.495 ************************************ 00:03:24.495 23:29:59 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:24.495 23:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:24.495 23:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:24.495 23:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:24.495 23:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:24.495 23:29:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:24.495 23:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:24.495 23:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:24.495 23:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:24.495 23:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:24.495 23:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:24.495 23:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:24.495 23:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:24.495 23:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:24.495 23:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:24.495 23:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:24.495 23:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:24.495 23:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:24.495 23:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:24.495 23:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:24.495 23:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:24.495 23:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:24.495 23:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:24.495 23:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:24.495 23:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:24.495 23:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:24.495 23:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.495 23:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:25.886 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:25.886 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:25.886 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:25.886 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:25.886 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:25.886 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:25.886 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:25.886 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:25.886 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:25.886 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:25.886 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:25.886 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:25.886 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:25.886 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 
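What the get_test_nr_hugepages / get_test_nr_hugepages_per_node trace above computes, as a minimal standalone bash sketch (illustrative only; the real logic lives in setup/hugepages.sh): the 1G request (1048576 kB) is divided by the default 2048 kB hugepage size, giving 512 pages, and each node id passed in ('0' and '1' here) is assigned that count before scripts/setup.sh runs.

# Minimal sketch mirroring the trace, not the actual setup/hugepages.sh code.
default_hugepages=2048              # kB, the Hugepagesize seen in the snapshots
size=1048576                        # kB, the "1G" in per_node_1G_alloc
node_ids=('0' '1')
(( size >= default_hugepages )) || exit 1
nr_hugepages=$(( size / default_hugepages ))     # 1048576 / 2048 = 512
declare -a nodes_test
for _no_nodes in "${node_ids[@]}"; do
    nodes_test[_no_nodes]=$nr_hugepages          # 512 pages reserved per node
done
IFS=,                               # same trick as 'local IFS=,' in the trace
echo "NRHUGE=$nr_hugepages HUGENODE=${node_ids[*]}"   # -> NRHUGE=512 HUGENODE=0,1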
00:03:24.495 23:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:24.495 23:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:24.495 23:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:24.495 23:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:24.495 23:29:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:25.886 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:25.886 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:25.886 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:25.886 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:25.886 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:25.886 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:25.886 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:25.886 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:25.886 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:25.886 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:25.886 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:25.886 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:25.886 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:25.886 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:25.886 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:25.886 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:25.886 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:25.886 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:25.886 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:25.886 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:25.886 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:25.886 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:25.886 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:25.886 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:25.886 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:25.886 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:25.886 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:25.886 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:25.886 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:25.887 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:25.887 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:25.887 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:25.887 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:25.887 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:25.887 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.887 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:25.887 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:25.887 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:25.887 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41850572 kB' 'MemAvailable: 45396816 kB' 'Buffers: 2704 kB' 'Cached: 14277104 kB' 'SwapCached: 0 kB' 'Active: 11251928 kB' 'Inactive: 3510464 kB' 'Active(anon): 10815512 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510464 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 486072 kB' 'Mapped: 195992 kB' 'Shmem: 10332928 kB' 'KReclaimable: 186404 kB' 'Slab: 539780 kB' 'SReclaimable: 186404 kB' 'SUnreclaim: 353376 kB' 'KernelStack: 12976 kB' 'PageTables: 8372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 11932652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196224 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1611356 kB' 'DirectMap2M: 20328448 kB' 'DirectMap1G: 47185920 kB'
00:03:25.887 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:25.887 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:03:25.887 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:25.887 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[... the same compare/continue/read cycle repeats for every remaining /proc/meminfo field, MemFree through HardwareCorrupted ...]
00:03:25.888 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:25.888 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:25.888 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:25.888 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
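The long compare/continue runs above are bash xtrace of get_meminfo scanning a meminfo file one field at a time. A condensed, self-contained sketch of that idiom (illustrative; the traced function lives in setup/common.sh and may differ in detail). Note the [[ -e /sys/devices/system/node/node/meminfo ]] record above is this probe with an empty node id, which is why the call falls back to /proc/meminfo:

#!/usr/bin/env bash
shopt -s extglob      # needed for the +([0-9]) pattern below
get_meminfo() {       # usage: get_meminfo <Field> [node]
    local get=$1 node=${2:-} var val _ line
    local mem_f=/proc/meminfo
    # with a node id, read that node's meminfo instead of the global one
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")     # strip per-node "Node N " prefixes
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # -> the repeated 'continue' records
        echo "$val"
        return 0
    done
    return 1
}
get_meminfo HugePages_Surp    # prints 0 for the snapshots above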
00:03:25.888 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:25.888 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:25.888 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:25.888 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:25.888 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:25.888 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:25.888 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:25.888 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:25.888 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.888 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:25.888 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:25.888 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:25.888 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41846288 kB' 'MemAvailable: 45392532 kB' 'Buffers: 2704 kB' 'Cached: 14277108 kB' 'SwapCached: 0 kB' 'Active: 11254372 kB' 'Inactive: 3510464 kB' 'Active(anon): 10817956 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510464 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488832 kB' 'Mapped: 195980 kB' 'Shmem: 10332932 kB' 'KReclaimable: 186404 kB' 'Slab: 539780 kB' 'SReclaimable: 186404 kB' 'SUnreclaim: 353376 kB' 'KernelStack: 12976 kB' 'PageTables: 8312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 11935324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196212 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1611356 kB' 'DirectMap2M: 20328448 kB' 'DirectMap1G: 47185920 kB'
00:03:25.888 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:25.888 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:03:25.888 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:25.888 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[... the same compare/continue/read cycle repeats for every remaining /proc/meminfo field, MemFree through HugePages_Rsvd ...]
00:03:25.890 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:25.890 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:25.890 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:25.890 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
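verify_nr_hugepages is collecting three counters here: transparent hugepage usage (AnonHugePages), surplus pages (HugePages_Surp, just read as 0) and, next, reserved pages (HugePages_Rsvd), presumably feeding the same per-node comparison that produced the node0=1024-expecting-1024 check earlier. Equivalent self-contained one-liners for what these get_meminfo calls return (a sketch, not the script's own code):

anon=$(awk '$1 == "AnonHugePages:"  {print $2}' /proc/meminfo)   # THP in use, kB
surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)   # surplus pages
resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)   # reserved pages
echo "anon=$anon surp=$surp resv=$resv"    # all 0 in the snapshots above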
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:25.890 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:25.890 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:25.890 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:25.890 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:25.890 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:25.890 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.890 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.890 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.890 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.890 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.890 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.890 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.890 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.890 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41846304 kB' 'MemAvailable: 45392548 kB' 'Buffers: 2704 kB' 'Cached: 14277124 kB' 'SwapCached: 0 kB' 'Active: 11254624 kB' 'Inactive: 3510464 kB' 'Active(anon): 10818208 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510464 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488680 kB' 'Mapped: 196304 kB' 'Shmem: 10332948 kB' 'KReclaimable: 186404 kB' 'Slab: 539928 kB' 'SReclaimable: 186404 kB' 'SUnreclaim: 353524 kB' 'KernelStack: 12960 kB' 'PageTables: 8260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 11935348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196212 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1611356 kB' 'DirectMap2M: 20328448 kB' 'DirectMap1G: 47185920 kB' 00:03:25.890 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.890 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.890 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.891 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.891 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.891 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.891 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- 
00:03:25.890 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:25.890 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:25.890 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:25.890 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:25.890 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:25.890 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:25.890 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:25.890 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:25.890 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.890 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:25.890 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41846304 kB' 'MemAvailable: 45392548 kB' 'Buffers: 2704 kB' 'Cached: 14277124 kB' 'SwapCached: 0 kB' 'Active: 11254624 kB' 'Inactive: 3510464 kB' 'Active(anon): 10818208 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510464 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488680 kB' 'Mapped: 196304 kB' 'Shmem: 10332948 kB' 'KReclaimable: 186404 kB' 'Slab: 539928 kB' 'SReclaimable: 186404 kB' 'SUnreclaim: 353524 kB' 'KernelStack: 12960 kB' 'PageTables: 8260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 11935348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196212 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1611356 kB' 'DirectMap2M: 20328448 kB' 'DirectMap1G: 47185920 kB'
00:03:25.891 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [field-by-field scan of the snapshot above: MemTotal through HugePages_Free skipped via continue]
00:03:25.892 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:25.892 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:25.892 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:25.892 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:25.893 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:25.893 nr_hugepages=1024
00:03:25.893 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:25.893 resv_hugepages=0
00:03:25.893 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:25.893 surplus_hugepages=0
00:03:25.893 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:25.893 anon_hugepages=0
00:03:25.893 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:25.893 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:25.893 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:25.893 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:25.893 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:25.893 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:25.893 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:25.893 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:25.893 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:25.893 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:25.893 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.893 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:25.893 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41846304 kB' 'MemAvailable: 45392548 kB' 'Buffers: 2704 kB' 'Cached: 14277148 kB' 'SwapCached: 0 kB' 'Active: 11250096 kB' 'Inactive: 3510464 kB' 'Active(anon): 10813680 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510464 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484168 kB' 'Mapped: 196304 kB' 'Shmem: 10332972 kB' 'KReclaimable: 186404 kB' 'Slab: 539928 kB' 'SReclaimable: 186404 kB' 'SUnreclaim: 353524 kB' 'KernelStack: 12976 kB' 'PageTables: 8312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 11930736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196224 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1611356 kB' 'DirectMap2M: 20328448 kB' 'DirectMap1G: 47185920 kB'
00:03:25.894 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [field-by-field scan of the snapshot above: MemTotal through Unaccepted skipped via continue]
00:03:25.895 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:25.895 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:25.895 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:25.895 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
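The three lookups (HugePages_Surp, HugePages_Rsvd, HugePages_Total) feed the consistency check traced at hugepages.sh@107-110: the test proceeds only if the configured page count equals what the kernel reports once surplus and reserved pages are counted. A hedged sketch of that bookkeeping, reusing the get_meminfo sketch above (the name verify_hugepage_accounting is illustrative, not from the source):

verify_hugepage_accounting() {
    local nr_hugepages=$1 surp resv total
    surp=$(get_meminfo HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
    total=$(get_meminfo HugePages_Total)  # 1024 in this run
    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
    # Global pool must balance: reported total == configured + surplus + reserved.
    (( total == nr_hugepages + surp + resv ))
}

verify_hugepage_accounting 1024   # returns 0 when the pool is consistent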
00:03:25.895 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:25.895 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:25.895 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:25.895 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:25.895 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:25.895 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:25.895 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:25.895 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:25.895 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:25.895 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:25.895 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:25.895 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:25.895 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:25.895 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:25.895 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:25.895 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:25.895 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:25.895 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:25.895 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.895 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:25.895 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 27027932 kB' 'MemUsed: 5849008 kB' 'SwapCached: 0 kB' 'Active: 3740868 kB' 'Inactive: 201288 kB' 'Active(anon): 3567736 kB' 'Inactive(anon): 0 kB' 'Active(file): 173132 kB' 'Inactive(file): 201288 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3858048 kB' 'Mapped: 94508 kB' 'AnonPages: 87380 kB' 'Shmem: 3483628 kB' 'KernelStack: 6360 kB' 'PageTables: 3052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 57700 kB' 'Slab: 239932 kB' 'SReclaimable: 57700 kB' 'SUnreclaim: 182232 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:25.895 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [field-by-field scan of the node0 snapshot: MemTotal through Unaccepted skipped via continue]
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.896 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.896 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.896 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.896 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.896 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.896 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.896 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.896 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.896 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:25.896 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:25.896 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:25.896 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:25.896 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:25.896 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:25.896 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:25.896 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:25.896 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:25.896 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.896 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.896 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:25.896 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:25.896 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.896 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.896 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.896 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664784 kB' 'MemFree: 14815028 kB' 'MemUsed: 12849756 kB' 'SwapCached: 0 kB' 'Active: 7512688 kB' 'Inactive: 3309176 kB' 'Active(anon): 7249404 kB' 'Inactive(anon): 0 kB' 'Active(file): 263284 kB' 'Inactive(file): 3309176 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10421828 kB' 'Mapped: 101412 kB' 'AnonPages: 400128 kB' 'Shmem: 6849368 kB' 'KernelStack: 6552 kB' 'PageTables: 5032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128704 kB' 'Slab: 299988 kB' 'SReclaimable: 128704 kB' 'SUnreclaim: 
171284 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.897 23:30:00 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.897 23:30:00 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.897 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.898 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.898 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.898 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.898 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.898 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.898 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.898 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.898 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.898 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.898 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.898 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.898 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.898 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.898 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.898 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.898 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.898 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.898 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.898 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.898 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.898 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.898 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.898 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.898 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.898 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.898 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.898 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.898 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.898 23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.898 
23:30:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.898 23:30:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.898 23:30:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.898 23:30:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.898 23:30:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.898 23:30:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.898 23:30:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.898 23:30:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.898 23:30:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.898 23:30:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.898 23:30:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.898 23:30:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.898 23:30:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.898 23:30:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.898 23:30:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.898 23:30:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:25.898 23:30:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:25.898 23:30:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:25.898 23:30:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:25.898 23:30:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:25.898 23:30:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:25.898 23:30:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:25.898 node0=512 expecting 512 00:03:25.898 23:30:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:26.157 23:30:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:26.157 23:30:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:26.157 23:30:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:26.157 node1=512 expecting 512 00:03:26.157 23:30:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:26.157 00:03:26.157 real 0m1.423s 00:03:26.157 user 0m0.598s 00:03:26.157 sys 0m0.785s 00:03:26.157 23:30:01 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:26.157 23:30:01 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:26.157 ************************************ 00:03:26.157 END TEST per_node_1G_alloc 00:03:26.157 ************************************ 00:03:26.157 23:30:01 
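The wall of xtrace above is setup/common.sh's get_meminfo helper scanning every field of a node's meminfo until HugePages_Surp matches, at which point the value (0) is echoed back to the caller at hugepages.sh@117. A condensed reconstruction of the helper, as implied by the @17-@33 trace markers (a sketch pieced together from the trace, not the verbatim source):

get_meminfo() {
    local get=$1
    local node=$2
    local var val
    local mem_f mem

    mem_f=/proc/meminfo
    # Per-node queries (node0/node1 above) read that node's own meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Strip the "Node <N> " prefix carried by per-node meminfo lines
    # (assumes shopt -s extglob is enabled elsewhere in common.sh)
    mem=("${mem[@]#Node +([0-9]) }")

    # Each repeated [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue
    # pair in the trace above is one iteration of this loop
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}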
setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:26.157 23:30:01 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:26.157 23:30:01 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:26.157 23:30:01 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:26.157 23:30:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:26.157 ************************************ 00:03:26.157 START TEST even_2G_alloc 00:03:26.157 ************************************ 00:03:26.157 23:30:01 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:26.157 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:26.157 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:26.157 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:26.157 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:26.157 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:26.157 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:26.157 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:26.157 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:26.157 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:26.157 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:26.157 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:26.157 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:26.157 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:26.157 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:26.157 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:26.157 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:26.157 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:26.157 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:26.157 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:26.158 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:26.158 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:26.158 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:26.158 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:26.158 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:26.158 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:26.158 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:26.158 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:26.158 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:27.094 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:27.094 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:27.094 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:27.094 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:27.094 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:27.094 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:27.094 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:27.094 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:27.094 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:27.094 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:27.094 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:27.094 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:27.094 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:27.094 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:27.095 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:27.095 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:27.095 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41818856 kB' 'MemAvailable: 45365068 kB' 'Buffers: 2704 kB' 
'Cached: 14277236 kB' 'SwapCached: 0 kB' 'Active: 11255760 kB' 'Inactive: 3510464 kB' 'Active(anon): 10819344 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510464 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489800 kB' 'Mapped: 196384 kB' 'Shmem: 10333060 kB' 'KReclaimable: 186340 kB' 'Slab: 539792 kB' 'SReclaimable: 186340 kB' 'SUnreclaim: 353452 kB' 'KernelStack: 12960 kB' 'PageTables: 8280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 11933736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196180 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1611356 kB' 'DirectMap2M: 20328448 kB' 'DirectMap1G: 47185920 kB' 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.362 
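The dump just read is internally consistent with the test's parameters: the trace at hugepages.sh@49-@84 (right after START TEST even_2G_alloc) requested size=2097152 kB with a 2048 kB default hugepage, i.e. 1024 pages split evenly across the two NUMA nodes as 512+512, and the kernel now reports 'HugePages_Total: 1024' and 'Hugetlb: 2097152 kB'. The per-node split loop, as the @81-@84 markers suggest (illustrative sketch under those assumptions, not the verbatim hugepages.sh source):

_nr_hugepages=$((2097152 / 2048))   # 1024 pages of 2048 kB = 2 GiB total
_no_nodes=2
nodes_test=()
while ((_no_nodes > 0)); do
    # Divide what remains over the remaining nodes: 1024/2=512, then 512/1=512
    nodes_test[_no_nodes - 1]=$((_nr_hugepages / _no_nodes))
    : $((_nr_hugepages -= nodes_test[_no_nodes - 1]))
    : $((--_no_nodes))
done
# nodes_test=(512 512), matching the "nodeN=512 expecting 512" checks
# this suite prints at hugepages.sh@128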
23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.362 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.363 23:30:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.363 23:30:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.363 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.364 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.364 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:27.364 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:27.364 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:27.364 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.364 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:27.364 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:27.364 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.364 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.364 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.364 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.364 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.364 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.364 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.364 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.364 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41819736 kB' 'MemAvailable: 45365948 kB' 'Buffers: 2704 kB' 'Cached: 14277240 kB' 'SwapCached: 0 kB' 'Active: 11254544 kB' 'Inactive: 3510464 kB' 'Active(anon): 10818128 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510464 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488388 kB' 'Mapped: 196368 kB' 'Shmem: 10333064 kB' 'KReclaimable: 186340 kB' 'Slab: 539784 kB' 'SReclaimable: 186340 kB' 'SUnreclaim: 353444 kB' 'KernelStack: 12928 kB' 'PageTables: 8084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 11933752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196148 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1611356 kB' 'DirectMap2M: 20328448 kB' 'DirectMap1G: 47185920 kB' 00:03:27.364 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.364 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.364 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.364 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.364 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.364 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.364 
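At this point verify_nr_hugepages has established anon=0 (transparent hugepages report "always [madvise] never", i.e. not pinned to [never], so AnonHugePages was read and came back 0) and is re-scanning /proc/meminfo for the global HugePages_Surp. A rough outline of the verification flow implied by the @89-@99 markers (hedged sketch; names follow the trace, details may differ from the real setup/hugepages.sh):

verify_nr_hugepages() {
    local node sorted_t sorted_s surp resv anon

    anon=0
    # AnonHugePages is only meaningful when THP is not set to [never]
    if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)
    fi

    surp=$(get_meminfo HugePages_Surp)   # the scan in progress below
    # ...per-node surplus/reserved counts are then folded into nodes_test[]
    # and each node is checked against its expected page count, as in the
    # "node0=512 expecting 512" output of the previous test.
}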
23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:27.364 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:27.364 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:27.364 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... identical @31 read / @32 compare / @32 continue trace elided for the remaining /proc/meminfo fields (Buffers through HugePages_Rsvd): none of them matches HugePages_Surp ...]
00:03:27.366 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:27.366 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:27.366 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:27.366 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
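The scan just traced is the whole of get_meminfo: setup/common.sh reads the chosen meminfo file one `key: value` line at a time and echoes the value of the first key that matches the requested field. As a reading aid, here is a minimal bash sketch of that pattern, reconstructed from the traced commands at setup/common.sh@17-33; it is a paraphrase for orientation, not SPDK's verbatim source, and the standalone packaging and explicit for-loop are assumptions:

```bash
#!/usr/bin/env bash
shopt -s extglob # needed for the +([0-9]) pattern below

# Sketch of the get_meminfo pattern traced above (setup/common.sh@17-33);
# reconstructed from the trace, not copied from SPDK's sources.
get_meminfo() {
	local get=$1 node=$2
	local var val
	local mem_f mem
	mem_f=/proc/meminfo
	# With a node argument, read the per-NUMA-node file instead.
	[[ -e /sys/devices/system/node/node$node/meminfo ]] \
		&& mem_f=/sys/devices/system/node/node$node/meminfo
	mapfile -t mem < "$mem_f"
	# node*/meminfo prefixes every line with "Node <N> "; strip it.
	mem=("${mem[@]#Node +([0-9]) }")
	local line
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] || continue
		echo "$val" # kB figure, or a bare count for HugePages_* fields
		return 0
	done
	return 1
}

get_meminfo HugePages_Surp # prints 0 in the run traced here
```

The per-node branch matters further down in this log: called as `get_meminfo HugePages_Surp 0`, the same lookup reads /sys/devices/system/node/node0/meminfo and strips the `Node 0 ` prefix before matching.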
00:03:27.366 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:27.366 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:27.366 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:27.366 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:27.366 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:27.366 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:27.366 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:27.366 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:27.366 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:27.366 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:27.366 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:27.366 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:27.366 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41820328 kB' 'MemAvailable: 45366540 kB' 'Buffers: 2704 kB' 'Cached: 14277260 kB' 'SwapCached: 0 kB' 'Active: 11254772 kB' 'Inactive: 3510464 kB' 'Active(anon): 10818356 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510464 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488584 kB' 'Mapped: 196368 kB' 'Shmem: 10333084 kB' 'KReclaimable: 186340 kB' 'Slab: 539844 kB' 'SReclaimable: 186340 kB' 'SUnreclaim: 353504 kB' 'KernelStack: 12960 kB' 'PageTables: 8208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 11934140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196148 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1611356 kB' 'DirectMap2M: 20328448 kB' 'DirectMap1G: 47185920 kB'
[... identical @31 read / @32 compare / @32 continue trace elided: every field from MemTotal through HugePages_Free is compared against HugePages_Rsvd and skipped ...]
00:03:27.368 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:27.368 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:27.368 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:27.368 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:27.368 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:27.368 nr_hugepages=1024
00:03:27.368 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:27.368 resv_hugepages=0
00:03:27.368 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:27.368 surplus_hugepages=0
00:03:27.368 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:27.368 anon_hugepages=0
00:03:27.368 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:27.368 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
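At this point setup/hugepages.sh has every number it needs: it prints them and asserts they are consistent, and the call traced next (setup/hugepages.sh@110) fetches HugePages_Total and repeats the check against the kernel's own total. A condensed sketch of that accounting check follows; variable names mirror the trace, while the standalone packaging and the reuse of the get_meminfo sketch from the earlier block are assumptions:

```bash
# Accounting check sketched from the setup/hugepages.sh@99-110 trace;
# relies on the hypothetical get_meminfo sketch shown earlier.
nr_hugepages=1024                      # pages requested by the test setup
surp=$(get_meminfo HugePages_Surp)     # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
total=$(get_meminfo HugePages_Total)   # 1024 in this run

# The kernel pool must account for every page: the reported total equals
# the requested pages plus any surplus and reserved pages.
(( total == nr_hugepages + surp + resv )) || echo "hugepages accounting mismatch" >&2
```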
00:03:27.368 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:27.368 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:27.368 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:27.368 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:27.368 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:27.368 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:27.368 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:27.368 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:27.368 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:27.368 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:27.368 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:27.368 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:27.368 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41819896 kB' 'MemAvailable: 45366108 kB' 'Buffers: 2704 kB' 'Cached: 14277284 kB' 'SwapCached: 0 kB' 'Active: 11254796 kB' 'Inactive: 3510464 kB' 'Active(anon): 10818380 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510464 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488624 kB' 'Mapped: 196368 kB' 'Shmem: 10333108 kB' 'KReclaimable: 186340 kB' 'Slab: 539844 kB' 'SReclaimable: 186340 kB' 'SUnreclaim: 353504 kB' 'KernelStack: 12976 kB' 'PageTables: 8260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 11934164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196148 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1611356 kB' 'DirectMap2M: 20328448 kB' 'DirectMap1G: 47185920 kB'
[... identical @31 read / @32 compare / @32 continue trace elided: every field from MemTotal through Unaccepted is compared against HugePages_Total and skipped ...]
00:03:27.370 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:27.370 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:27.370 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:27.370 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:27.370 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:27.370 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:27.370 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:27.370 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:27.370 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:27.370 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:27.370 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:27.370 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:27.370 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:27.370 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:27.370 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:27.370 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:27.370 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:03:27.370 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:27.370 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:27.370 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:27.370 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:27.370 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:27.370 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:27.370 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:27.370 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:27.370 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:27.370 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 27012076 kB' 'MemUsed: 5864864 kB' 'SwapCached: 0 kB' 'Active: 3735364 kB' 'Inactive: 201288 kB' 'Active(anon): 3562232 kB' 'Inactive(anon): 0 kB'
'Active(file): 173132 kB' 'Inactive(file): 201288 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3858112 kB' 'Mapped: 94956 kB' 'AnonPages: 81684 kB' 'Shmem: 3483692 kB' 'KernelStack: 6296 kB' 'PageTables: 2752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 57668 kB' 'Slab: 239864 kB' 'SReclaimable: 57668 kB' 'SUnreclaim: 182196 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:27.370 23:30:02 setup.sh.hugepages.even_2G_alloc [xtrace condensed: the setup/common.sh@31-32 read/continue loop skips node0 meminfo keys (MemTotal through FilePmdMapped); none match HugePages_Surp]
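The read/continue loop condensed above is setup/common.sh's get_meminfo helper walking key/value pairs until the requested field matches. A minimal standalone sketch of the same pattern in bash, reconstructed from the trace rather than copied from the SPDK source (names and details are approximations):

  # get_meminfo KEY [NODE] -- print KEY's value from /proc/meminfo, or from
  # /sys/devices/system/node/nodeN/meminfo when a NUMA node is given.
  get_meminfo() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local line var val _
      while read -r line; do
          line=${line#Node "$node" }   # per-node files prefix every key with "Node N "
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then
              echo "$val"              # value only, e.g. 1024 for HugePages_Total
              return 0
          fi
      done < "$mem_f"
      return 1
  }

In this log, get_meminfo HugePages_Total returned 1024 (the echo 1024 above), and get_meminfo HugePages_Surp 0 queries node0's surplus hugepage count from its per-node meminfo dump.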
00:03:27.371 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.371 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.371 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.371 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.371 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.371 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.371 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.371 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.371 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.371 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.371 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.371 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.371 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.371 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.371 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.371 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:27.371 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:27.371 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:27.371 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:27.371 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:27.371 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.371 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:27.371 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:27.372 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.372 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.372 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:27.372 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:27.372 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.372 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.685 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.685 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.685 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664784 kB' 'MemFree: 14806812 kB' 'MemUsed: 12857972 kB' 'SwapCached: 0 kB' 'Active: 7519464 kB' 'Inactive: 3309176 kB' 'Active(anon): 7256180 kB' 
'Inactive(anon): 0 kB' 'Active(file): 263284 kB' 'Inactive(file): 3309176 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10421892 kB' 'Mapped: 101412 kB' 'AnonPages: 406912 kB' 'Shmem: 6849432 kB' 'KernelStack: 6664 kB' 'PageTables: 5456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128672 kB' 'Slab: 299980 kB' 'SReclaimable: 128672 kB' 'SUnreclaim: 171308 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:27.686 23:30:02 setup.sh.hugepages.even_2G_alloc [xtrace condensed: the setup/common.sh@31-32 read/continue loop skips node1 meminfo keys (MemTotal through FilePmdMapped); none match HugePages_Surp]
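Both per-node surplus queries come back 0, so the verification below reduces to the even split itself: 1024 pages over two NUMA nodes is 512 each, which is exactly what the "node0=512 expecting 512" and "node1=512 expecting 512" lines assert. A hedged sketch of that arithmetic, reusing the get_meminfo sketch above (variable names are illustrative, not SPDK's exact code):

  # even_2G_alloc expectation: HUGEMEM pages split evenly across NUMA nodes.
  nr_hugepages=1024 no_nodes=2 surp=0 resv=0
  # global sanity check, mirroring the traced (( 1024 == nr_hugepages + surp + resv ))
  (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || exit 1
  for node in 0 1; do
      expected=$(( nr_hugepages / no_nodes ))                  # 512 per node
      actual=$(( $(get_meminfo HugePages_Total "$node") + surp ))
      echo "node$node=$actual expecting $expected"
      [[ $actual == "$expected" ]] || exit 1
  done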
00:03:27.687 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.687 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.687 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.687 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.687 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.687 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.687 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.687 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.687 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.687 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.687 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.687 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.687 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.687 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.687 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.687 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:27.687 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:27.687 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:27.687 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:27.687 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:27.687 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:27.687 node0=512 expecting 512 00:03:27.687 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:27.687 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:27.687 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:27.687 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:27.687 node1=512 expecting 512 00:03:27.687 23:30:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:27.687 00:03:27.687 real 0m1.446s 00:03:27.687 user 0m0.603s 00:03:27.687 sys 0m0.808s 00:03:27.687 23:30:02 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:27.687 23:30:02 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:27.687 ************************************ 00:03:27.687 END TEST even_2G_alloc 00:03:27.687 ************************************ 00:03:27.687 23:30:02 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:27.687 23:30:02 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:27.687 23:30:02 setup.sh.hugepages 
-- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:27.687 23:30:02 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:27.687 23:30:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:27.687 ************************************ 00:03:27.687 START TEST odd_alloc 00:03:27.687 ************************************ 00:03:27.687 23:30:02 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:27.687 23:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:27.687 23:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:27.687 23:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:27.687 23:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:27.687 23:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:27.687 23:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:27.687 23:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:27.687 23:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:27.687 23:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:27.687 23:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:27.687 23:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:27.687 23:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:27.687 23:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:27.687 23:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:27.687 23:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:27.687 23:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:27.687 23:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:27.687 23:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:27.687 23:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:27.687 23:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:27.687 23:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:27.687 23:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:27.687 23:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:27.687 23:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:27.687 23:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:27.687 23:30:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:27.687 23:30:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:27.687 23:30:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:28.623 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:28.623 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:28.623 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:28.883 0000:00:04.4 (8086 0e24): Already using 
the vfio-pci driver 00:03:28.883 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:28.883 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:28.883 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:28.883 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:28.883 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:28.883 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:28.883 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:28.883 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:28.883 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:28.883 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:28.883 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:28.883 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:28.883 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:28.883 23:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:28.883 23:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:28.883 23:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:28.883 23:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:28.883 23:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:28.883 23:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:28.883 23:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:28.883 23:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:28.883 23:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:28.883 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:28.883 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:28.883 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:28.883 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.883 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.883 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.883 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.883 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.883 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.883 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.883 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.884 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41860088 kB' 'MemAvailable: 45406292 kB' 'Buffers: 2704 kB' 'Cached: 14277372 kB' 'SwapCached: 0 kB' 'Active: 11254168 kB' 'Inactive: 3510464 kB' 'Active(anon): 10817752 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510464 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487276 kB' 'Mapped: 195488 kB' 'Shmem: 10333196 kB' 'KReclaimable: 
186324 kB' 'Slab: 539704 kB' 'SReclaimable: 186324 kB' 'SUnreclaim: 353380 kB' 'KernelStack: 13216 kB' 'PageTables: 10040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 11921856 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196548 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1611356 kB' 'DirectMap2M: 20328448 kB' 'DirectMap1G: 47185920 kB' 00:03:28.884 23:30:03 setup.sh.hugepages.odd_alloc [xtrace condensed: the setup/common.sh@31-32 read/continue loop skips /proc/meminfo keys (MemTotal through CommitLimit); none match AnonHugePages] 00:03:28.885
23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.885 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.885 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.885 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.885 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.885 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.885 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.885 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.885 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.885 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.885 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.885 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.885 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.885 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.885 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.885 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.885 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.885 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.885 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.885 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.885 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.885 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:28.885 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.885 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.885 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.885 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.885 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:28.885 23:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:28.885 23:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:28.885 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.885 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:28.885 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:28.885 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.885 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.885 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.885 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 
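The block above is one complete pass through setup/common.sh's get_meminfo helper: the meminfo file is slurped into an array, each element is split on ': ' into a key and a value, and every key is compared against the requested one until it matches, at which point the value is echoed and the function returns. A minimal sketch of that loop, reconstructed from this trace alone (the per-node handling and exact source layout are assumptions, not a copy of the SPDK source):

    #!/usr/bin/env bash
    shopt -s extglob # the +([0-9]) pattern below is an extglob

    # get_meminfo KEY [NODE] -- print KEY's value from /proc/meminfo, or from
    # the given NUMA node's meminfo file. Reconstructed from the xtrace above.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f mem line
        mem_f=/proc/meminfo
        # Assumption: a per-node request reads the node's own file instead
        # (the trace's "[[ -e /sys/devices/system/node/node/meminfo ]]" check).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip it so both
        # file formats parse identically.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue # e.g. Inactive != AnonHugePages
            echo "$val" # kB for sizes, a bare page count for HugePages_*
            return 0
        done
        return 1
    }

Called as get_meminfo AnonHugePages on this box it prints 0, which is exactly the anon=0 assignment recorded above; the HugePages_Surp lookup that starts next in the trace works the same way.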
00:03:28.885 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.885 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.885 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:28.885 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:28.885 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41858828 kB' 'MemAvailable: 45405032 kB' 'Buffers: 2704 kB' 'Cached: 14277376 kB' 'SwapCached: 0 kB' 'Active: 11253740 kB' 'Inactive: 3510464 kB' 'Active(anon): 10817324 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510464 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 486868 kB' 'Mapped: 195488 kB' 'Shmem: 10333200 kB' 'KReclaimable: 186324 kB' 'Slab: 539688 kB' 'SReclaimable: 186324 kB' 'SUnreclaim: 353364 kB' 'KernelStack: 13184 kB' 'PageTables: 9148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 11919648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196244 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1611356 kB' 'DirectMap2M: 20328448 kB' 'DirectMap1G: 47185920 kB'
[... setup/common.sh@31/@32 read/compare/continue trace elided: every key from MemTotal through HugePages_Rsvd is read and rejected; only HugePages_Surp matches ...]
00:03:28.887 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:28.887 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:28.887 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:28.887 23:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:28.887 23:30:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:28.887 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:28.887 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:28.887 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:28.887 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:28.887 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.887 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:28.887 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:28.887 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.887 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.887 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:28.887 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:28.887 23:30:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41859032 kB' 'MemAvailable: 45405236 kB' 'Buffers: 2704 kB' 'Cached: 14277396 kB' 'SwapCached: 0 kB' 'Active: 11251656 kB' 'Inactive: 3510464 kB' 'Active(anon): 10815240 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510464 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485144 kB' 'Mapped: 195352 kB' 'Shmem: 10333220 kB' 'KReclaimable: 186324 kB' 'Slab: 539708 kB' 'SReclaimable: 186324 kB' 'SUnreclaim: 353384 kB' 'KernelStack: 12864 kB' 'PageTables: 7856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 11919668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196228 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1611356 kB' 'DirectMap2M: 20328448 kB' 'DirectMap1G: 47185920 kB'
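Note the mem=("${mem[@]#Node +([0-9]) }") step that precedes each snapshot above: per-NUMA-node meminfo files prefix every row with "Node N ", and that single extglob parameter expansion strips the prefix from the whole array so the same scan loop handles both the global and the per-node files. A tiny self-contained illustration (sample values invented for the demo):

    #!/usr/bin/env bash
    shopt -s extglob # +([0-9]) is an extglob pattern

    # Lines as they would appear in /sys/devices/system/node/node0/meminfo.
    mem=('Node 0 MemTotal:       30270862 kB' 'Node 0 HugePages_Total:   512')

    # Strip the 'Node <digits> ' prefix from every element in one expansion.
    mem=("${mem[@]#Node +([0-9]) }")

    printf '%s\n' "${mem[@]}"
    # MemTotal:       30270862 kB
    # HugePages_Total:   512

In this run node= is empty, so the script reads the global /proc/meminfo and the expansion is a no-op.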
[... setup/common.sh@31/@32 read/compare/continue trace elided: every key from MemTotal through HugePages_Free is read and rejected; only HugePages_Rsvd matches (wall clock rolls over to 23:30:04 during this scan) ...]
00:03:29.152 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:29.152 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:29.152 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:29.152 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:29.152 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:29.152 nr_hugepages=1025
00:03:29.152 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:29.152 resv_hugepages=0
00:03:29.152 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:29.152 surplus_hugepages=0
00:03:29.152 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:29.152 anon_hugepages=0
00:03:29.152 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:29.152 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
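The hugepages.sh lines just above are the odd_alloc test's bookkeeping: 1025 pages were requested (an odd count, hence the test name), and the script asserts both that the requested total reconciles with allocated + surplus + reserved pages and that the kernel granted exactly that many. A condensed sketch of that check, using the counter names from the trace (verify_hugepages itself is illustrative, not SPDK's actual function, and it assumes the get_meminfo sketch shown earlier in this log):

    # Verify an odd-sized hugepage allocation, e.g. verify_hugepages 1025.
    verify_hugepages() {
        local expected=$1
        local anon surp resv nr_hugepages
        anon=$(get_meminfo AnonHugePages)   # anon=0 in this run
        surp=$(get_meminfo HugePages_Surp)  # surp=0
        resv=$(get_meminfo HugePages_Rsvd)  # resv=0
        nr_hugepages=$(get_meminfo HugePages_Total)
        echo "nr_hugepages=$nr_hugepages"
        echo "resv_hugepages=$resv"
        echo "surplus_hugepages=$surp"
        echo "anon_hugepages=$anon"
        # Both assertions from the trace: the totals must reconcile, and the
        # kernel must have allocated exactly the odd count requested.
        (( expected == nr_hugepages + surp + resv )) || return 1
        (( expected == nr_hugepages ))
    }

With the values in this log (1025 total, 0 surplus, 0 reserved) both arithmetic tests succeed, which is why the trace moves straight on to the follow-up HugePages_Total read.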
00:03:29.152 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:29.152 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:29.152 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:29.152 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:29.152 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:29.152 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:29.152 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:29.152 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:29.152 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:29.152 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:29.152 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:29.152 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:29.152 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41859032 kB' 'MemAvailable: 45405236 kB' 'Buffers: 2704 kB' 'Cached: 14277396 kB' 'SwapCached: 0 kB' 'Active: 11253440 kB' 'Inactive: 3510464 kB' 'Active(anon): 10817024 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510464 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487804 kB' 'Mapped: 195352 kB' 'Shmem: 10333220 kB' 'KReclaimable: 186324 kB' 'Slab: 539708 kB' 'SReclaimable: 186324 kB' 'SUnreclaim: 353384 kB' 'KernelStack: 12848 kB' 'PageTables: 7808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 11922760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196196 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1611356 kB' 'DirectMap2M: 20328448 kB' 'DirectMap1G: 47185920 kB'
[... setup/common.sh@31/@32 read/compare/continue trace elided: the scan for HugePages_Total has reached SecPageTables as this excerpt ends ...]
00:03:29.153 23:30:04
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.153 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 27043496 kB' 'MemUsed: 5833444 kB' 'SwapCached: 0 kB' 'Active: 3741084 kB' 'Inactive: 201288 kB' 'Active(anon): 3567952 kB' 'Inactive(anon): 0 kB' 'Active(file): 173132 kB' 'Inactive(file): 201288 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3858164 kB' 'Mapped: 95012 kB' 'AnonPages: 87344 kB' 'Shmem: 3483744 kB' 'KernelStack: 6360 kB' 'PageTables: 2916 kB' 
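The per-field scan that xtrace expands so verbosely above is compact in source form. Here is a minimal bash reconstruction of the get_meminfo helper; the names (get, node, mem_f, mem) follow the setup/common.sh trace, but the body is a sketch inferred from the trace, not the verbatim SPDK source. The node0 dump it operates on follows below.

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    # Sketch of setup/common.sh get_meminfo, as reconstructed from the xtrace:
    # choose the global or per-node meminfo file, strip the "Node <n> " prefix
    # that per-node files carry, then scan field by field until the requested
    # key matches and echo its value.
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f=/proc/meminfo mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # one skipped field per trace line
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

On the host under test this is exactly what produced the 1025 above: get_meminfo HugePages_Total scanned the dump and echoed the first matching value.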
00:03:29.154 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 27043496 kB' 'MemUsed: 5833444 kB' 'SwapCached: 0 kB' 'Active: 3741084 kB' 'Inactive: 201288 kB' 'Active(anon): 3567952 kB' 'Inactive(anon): 0 kB' 'Active(file): 173132 kB' 'Inactive(file): 201288 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3858164 kB' 'Mapped: 95012 kB' 'AnonPages: 87344 kB' 'Shmem: 3483744 kB' 'KernelStack: 6360 kB' 'PageTables: 2916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 57668 kB' 'Slab: 239792 kB' 'SReclaimable: 57668 kB' 'SUnreclaim: 182124 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... xtrace, 00:03:29.154-00:03:29.155: the same @31-32 scan walks the node0 fields above, MemTotal through HugePages_Free, against \H\u\g\e\P\a\g\e\s\_\S\u\r\p, taking the @32 continue on every non-match ...]
00:03:29.155 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:29.155 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:29.155 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:29.155 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:29.155 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:29.155 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:29.155 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:29.155 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:29.155 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:03:29.155 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:29.155 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:29.155 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:29.155 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:29.155 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:29.155 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:29.155 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:29.155 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:29.155 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664784 kB' 'MemFree: 14813780 kB' 'MemUsed: 12851004 kB' 'SwapCached: 0 kB' 'Active: 7515544 kB' 'Inactive: 3309176 kB' 'Active(anon): 7252260 kB' 'Inactive(anon): 0 kB' 'Active(file): 263284 kB' 'Inactive(file): 3309176 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10421984 kB' 'Mapped: 100340 kB' 'AnonPages: 402820 kB' 'Shmem: 6849524 kB' 'KernelStack: 6536 kB' 'PageTables: 4916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128656 kB' 'Slab: 300012 kB' 'SReclaimable: 128656 kB' 'SUnreclaim: 171356 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
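The hugepages.sh@115-117 lines above are the per-node accounting step of verify_nr_hugepages: the expected count per node starts from the reserved pages and adds that node's surplus, read from the per-node meminfo just dumped. A hedged sketch reusing the get_meminfo reconstruction above (the initial nodes_test values are assumptions for the demo; the real script builds them earlier):

    # resv is the reserved-page count (0 in this run); nodes_test holds the
    # per-node expectation being accumulated.
    resv=0
    nodes_test=(512 513)
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))  # "+= 0" in the trace
    done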
[... xtrace, 00:03:29.155-00:03:29.157: the @31-32 scan walks the node1 fields above, MemTotal through HugePages_Free, against \H\u\g\e\P\a\g\e\s\_\S\u\r\p, taking the @32 continue on every non-match ...]
00:03:29.157 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:29.157 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:29.157 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:29.157 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:29.157 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:29.157 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:29.157 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:29.157 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:03:29.157 node0=512 expecting 513
00:03:29.157 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:29.157 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:29.157 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:29.157 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:03:29.157 node1=513 expecting 512
00:03:29.157 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:03:29.157
00:03:29.157 real	0m1.540s
00:03:29.157 user	0m0.630s
00:03:29.157 sys	0m0.872s
00:03:29.157 23:30:04 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:29.157 23:30:04 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:29.157 ************************************
00:03:29.157 END TEST odd_alloc
00:03:29.157 ************************************
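odd_alloc deliberately requests an odd total (1025) so the pages cannot split evenly across the two NUMA nodes: one node ends up with 512 and the other with 513, and the "expecting" lines above record that either assignment is acceptable. The @127/@130 trace relies on a bash idiom: using the page counts themselves as indexed-array keys, because "${!arr[*]}" returns indexed-array keys in ascending order, so the comparison is order-insensitive per node. A self-contained sketch of that comparison (values taken from the trace):

    nodes_test=([0]=512 [1]=513)   # measured per node via get_meminfo
    nodes_sys=([0]=512 [1]=513)    # counts requested through sysfs

    sorted_t=() sorted_s=()
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1   # key = page count; keys come back sorted
        sorted_s[nodes_sys[node]]=1
    done
    # "512 513" == "512 513" regardless of which node got the extra page:
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo "hugepage distribution OK"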
00:03:29.157 23:30:04 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:29.157 23:30:04 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:29.157 23:30:04 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:29.157 23:30:04 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:29.157 23:30:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:29.157 ************************************
00:03:29.157 START TEST custom_alloc
00:03:29.157 ************************************
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
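The @49-@84 trace above is the size-to-page-count conversion plus the per-node split, which continues in the lines below. The numbers are consistent with nr_hugepages = size / default_hugepages (1048576/2048 = 512, 2097152/2048 = 1024, matching the 2048 kB Hugepagesize reported later in this test), but that formula and the helper bodies here are a hedged reconstruction, not the verbatim SPDK source:

    default_hugepages=2048   # assumption: default huge page size unit from the trace
    nodes_test=()

    get_test_nr_hugepages() {
        local size=$1
        (( size >= default_hugepages )) || return 1
        nr_hugepages=$(( size / default_hugepages ))   # 1048576 -> 512, 2097152 -> 1024
    }

    # Even split across nodes when no explicit per-node counts are given;
    # the two-node count is an assumption matching this host (@65 above).
    get_test_nr_hugepages_per_node() {
        local _no_nodes=2 _nr_hugepages=$nr_hugepages
        while (( _no_nodes > 0 )); do
            nodes_test[_no_nodes - 1]=$(( _nr_hugepages / 2 ))   # 256 per node, per @82
            (( _no_nodes-- ))
        done
    }

    get_test_nr_hugepages 1048576 && get_test_nr_hugepages_per_node
    echo "${nodes_test[@]}"   # 256 256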
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:03:29.157 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:29.158 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:29.158 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:29.158 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:29.158 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:29.158 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:29.158 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:03:29.158 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:29.158 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:29.158 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:29.158 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:29.158 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:29.158 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:29.158 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:29.158 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:03:29.158 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:29.158 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:29.158 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:29.158 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:03:29.158 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:29.158 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:03:29.158 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:03:29.158 23:30:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:29.158 23:30:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
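The @181-@187 lines above build the HUGENODE parameter that scripts/setup.sh consumes; the device rescan output from that invocation follows. custom_alloc set IFS=',' at its top (@167), so the final "${HUGENODE[*]}" expansion joins the per-node entries with commas. A sketch under those assumptions, with values from the trace:

    IFS=','                          # mirrors custom_alloc's local IFS=,
    nodes_hp=([0]=512 [1]=1024)
    HUGENODE=() _nr_hugepages=0
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
        (( _nr_hugepages += nodes_hp[node] ))    # 1536 total
    done
    # Scalar assignment collapses the array into its joined element 0:
    HUGENODE="${HUGENODE[*]}"
    echo "$HUGENODE"                 # nodes_hp[0]=512,nodes_hp[1]=1024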
(8086 0e27): Already using the vfio-pci driver 00:03:30.540 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:30.540 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:30.540 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:30.540 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:30.540 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:30.540 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:30.540 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:30.540 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:30.540 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:30.540 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:30.540 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 40832800 kB' 'MemAvailable: 44379004 kB' 'Buffers: 2704 kB' 'Cached: 14277504 kB' 'SwapCached: 0 kB' 'Active: 11251652 kB' 'Inactive: 3510464 kB' 'Active(anon): 10815236 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510464 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485144 kB' 'Mapped: 195324 kB' 'Shmem: 10333328 kB' 'KReclaimable: 186324 kB' 'Slab: 539744 kB' 'SReclaimable: 186324 kB' 'SUnreclaim: 353420 kB' 'KernelStack: 12864 kB' 'PageTables: 7916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 
'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 11919756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196180 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1611356 kB' 'DirectMap2M: 20328448 kB' 'DirectMap1G: 47185920 kB' 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
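Stepping back to the allocation itself: the custom_alloc trace above (setup/hugepages.sh@175-@187) is the core of this test case. It pins 512 hugepages on NUMA node 0 and 1024 on node 1, folds both into the HUGENODE list, totals them, and re-runs scripts/setup.sh with that layout. A minimal bash sketch of the assembly step, using this run's values; the comma join at the end is an assumed detail, since the trace only shows the resulting string:

```bash
#!/usr/bin/env bash
# Sketch of the per-node assembly traced at setup/hugepages.sh@175-@187 above.
# 512 pages on node 0 and 1024 on node 1 are this run's values; the join is
# one way to produce the HUGENODE string seen at @187, not SPDK's exact code.
nodes_hp=([0]=512 [1]=1024)                          # @175, @178

HUGENODE=() _nr_hugepages=0
for node in "${!nodes_hp[@]}"; do                    # @181
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")  # @182
    ((_nr_hugepages += nodes_hp[node]))              # @183
done

(IFS=,; echo "HUGENODE=${HUGENODE[*]}")  # HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024
echo "total=$_nr_hugepages"              # total=1536, matching nr_hugepages=1536 at @188
```

The total checks out against the /proc/meminfo snapshot that verify_nr_hugepages just captured: 1536 pages of 2048 kB are exactly the 'HugePages_Total: 1536' and 'Hugetlb: 3145728 kB' entries printed above.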
00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.541 
23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.541 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.541 23:30:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 
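The scan that resumes below is the second of three identical passes: verify_nr_hugepages calls the same get_meminfo helper for AnonHugePages (@97), HugePages_Surp (@99) and HugePages_Rsvd (@100), and each call walks every /proc/meminfo field until the key matches. Reconstructed from the setup/common.sh@16-@33 entries in this trace, the helper looks roughly like this (a paraphrase of what the xtrace shows, not the verbatim SPDK source):

```bash
# get_meminfo as reconstructed from the setup/common.sh trace above; argument
# handling is simplified and the structure is inferred, so treat it as a sketch.
shopt -s extglob  # needed for the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=${2:-}                             # @17, @18
    local var val                                        # @19
    local mem_f mem                                      # @20
    mem_f=/proc/meminfo                                  # @22
    # @23/@25: with a node argument, read that node's own meminfo instead
    if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"                            # @28: one field per element
    mem=("${mem[@]#Node +([0-9]) }")                     # @29: strip "Node N " prefixes
    # @16/@31: re-emit one field per line, split on ': ', stop at the first hit
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue                 # @32
        echo "$val"                                      # @33: value without its unit
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}
```

On this host the first pass printed 0, so hugepages.sh@97 recorded anon=0: transparent hugepages are in madvise mode (the 'always [madvise] never' check at @96) and none are currently backing anonymous memory.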
00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 40833324 kB' 'MemAvailable: 44379528 kB' 'Buffers: 2704 kB' 'Cached: 14277504 kB' 'SwapCached: 0 kB' 'Active: 11251988 kB' 'Inactive: 3510464 kB' 'Active(anon): 10815572 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510464 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485452 kB' 'Mapped: 195320 kB' 'Shmem: 10333328 kB' 'KReclaimable: 186324 kB' 'Slab: 539720 kB' 'SReclaimable: 186324 kB' 'SUnreclaim: 353396 kB' 'KernelStack: 12880 kB' 'PageTables: 7928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 11919776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196164 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1611356 kB' 'DirectMap2M: 20328448 kB' 'DirectMap1G: 47185920 kB' 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.542 23:30:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.542 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 
23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 40833324 kB' 'MemAvailable: 44379528 kB' 'Buffers: 2704 kB' 'Cached: 14277524 kB' 'SwapCached: 0 kB' 'Active: 11251660 kB' 'Inactive: 3510464 kB' 
'Active(anon): 10815244 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510464 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485140 kB' 'Mapped: 195320 kB' 'Shmem: 10333348 kB' 'KReclaimable: 186324 kB' 'Slab: 539732 kB' 'SReclaimable: 186324 kB' 'SUnreclaim: 353408 kB' 'KernelStack: 12864 kB' 'PageTables: 7904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 11919796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196164 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1611356 kB' 'DirectMap2M: 20328448 kB' 'DirectMap1G: 47185920 kB' 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.544 
23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.544 23:30:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': '
00:03:30.544-00:03:30.546 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # read/compare loop over the remaining meminfo keys: SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free -- none matches HugePages_Rsvd, continue
00:03:30.546 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:30.546 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:30.546 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:30.546 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:30.546 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
nr_hugepages=1536
00:03:30.546 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:30.546 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:30.546 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:30.546 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:30.546 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
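What the trace above is doing: setup/common.sh's get_meminfo walks the "key: value" lines of a meminfo file until the requested key (HugePages_Rsvd here) matches, then prints its value. A minimal self-contained sketch reconstructed from the trace -- not the verbatim setup/common.sh; the ${val:-0} fallback and the return codes are assumptions:

shopt -s extglob

get_meminfo() {
    # get_meminfo KEY [NODE] -- print KEY's value from /proc/meminfo, or from
    # the per-NUMA-node view when NODE is given and the sysfs file exists.
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo mem
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node meminfo lines look like "Node 0 MemTotal: ..."; strip the prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    # Scan "key: value [unit]" pairs until the requested key matches.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && echo "${val:-0}" && return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Rsvd     # -> 0 on this box
get_meminfo HugePages_Surp 0   # per-node view -> 0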
00:03:30.546 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:30.546 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:30.546 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:30.546 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:30.546 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:30.546 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.546 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:30.546 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:30.546 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.546 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.546 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:30.546 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:30.546 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 40832568 kB' 'MemAvailable: 44378772 kB' 'Buffers: 2704 kB' 'Cached: 14277528 kB' 'SwapCached: 0 kB' 'Active: 11252028 kB' 'Inactive: 3510464 kB' 'Active(anon): 10815612 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510464 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485548 kB' 'Mapped: 195320 kB' 'Shmem: 10333352 kB' 'KReclaimable: 186324 kB' 'Slab: 539732 kB' 'SReclaimable: 186324 kB' 'SUnreclaim: 353408 kB' 'KernelStack: 12896 kB' 'PageTables: 8020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 11919452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196164 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1611356 kB' 'DirectMap2M: 20328448 kB' 'DirectMap1G: 47185920 kB'
00:03:30.546-00:03:30.548 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # read/compare loop: MemTotal through Unaccepted (in the order printed above) -- no match for HugePages_Total, continue
00:03:30.548 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:30.548 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:03:30.548 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:30.548 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
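The @110 check is the accounting identity the test relies on: the kernel's HugePages_Total must equal the requested nr_hugepages plus surplus and reserved pages. A small sketch of the same check with this run's values (variable names illustrative, get_meminfo as sketched above):

nr_hugepages=1536 surp=0 resv=0
total=$(get_meminfo HugePages_Total)               # 1536 on this box
(( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2
# Cross-check against the dump above: 1536 pages * 2048 kB/page = 3145728 kB,
# which is exactly the 'Hugetlb:' figure /proc/meminfo reports.
echo $(( 1536 * 2048 ))                            # -> 3145728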
00:03:30.548 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:30.548 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:03:30.548 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:30.548 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:30.548 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:30.548 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:30.548 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:30.548 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
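get_nodes records a per-node page count for every /sys/devices/system/node/node* directory; here node0 gets 512 and node1 gets 1024, the deliberately uneven custom_alloc split that still sums to the global 1536. A sketch of that discovery, under the assumption that the counts come from each node's 2048kB nr_hugepages file (the xtrace only shows the already-expanded values):

shopt -s extglob nullglob
nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
    # Assumed source of the per-node count; the trace shows only the result.
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
no_nodes=${#nodes_sys[@]}   # 2 here: nodes_sys[0]=512, nodes_sys[1]=1024 (512 + 1024 = 1536)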
00:03:30.548 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:30.548 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:30.548 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:30.548 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:30.548 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:03:30.548 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:30.548 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:30.548 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.548 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:30.548 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:30.548 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.548 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.548 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:30.548 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:30.548 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 27047312 kB' 'MemUsed: 5829628 kB' 'SwapCached: 0 kB' 'Active: 3735540 kB' 'Inactive: 201288 kB' 'Active(anon): 3562408 kB' 'Inactive(anon): 0 kB' 'Active(file): 173132 kB' 'Inactive(file): 201288 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3858172 kB' 'Mapped: 94980 kB' 'AnonPages: 81824 kB' 'Shmem: 3483752 kB' 'KernelStack: 6248 kB' 'PageTables: 2728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 57668 kB' 'Slab: 239864 kB' 'SReclaimable: 57668 kB' 'SUnreclaim: 182196 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:30.548-00:03:30.549 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # read/compare loop: MemTotal through HugePages_Free (in the order printed above) -- no match for HugePages_Surp, continue
00:03:30.549 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:30.549 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:30.549 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:30.549 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
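Node 0's lookup above reads /sys/devices/system/node/node0/meminfo, where the kernel prefixes every line with "Node 0 "; common.sh@29 strips that prefix so the same key scan works for the global and per-node files alike. The @115-@117 loop then folds reserved and surplus pages into nodes_test. A short sketch of both steps (the nodes_test seeding is assumed from the earlier allocation; get_meminfo as sketched above):

shopt -s extglob
line='Node 0 HugePages_Surp:     0'
echo "${line#Node +([0-9]) }"   # -> 'HugePages_Surp:     0'

resv=0 nodes_test=([0]=512 [1]=1024)
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))                                     # resv is 0 in this run
    (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))    # surplus is 0 too
done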
00:03:30.549 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:30.549 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:30.549 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:30.549 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:30.549 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:03:30.549 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:30.549 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:30.549 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.549 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:30.549 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:30.549 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.549 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.549 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:30.549 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:30.549 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664784 kB' 'MemFree: 13785016 kB' 'MemUsed: 13879768 kB' 'SwapCached: 0 kB' 'Active: 7516204 kB' 'Inactive: 3309176 kB' 'Active(anon): 7252920 kB' 'Inactive(anon): 0 kB' 'Active(file): 263284 kB' 'Inactive(file): 3309176 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10422120 kB' 'Mapped: 100340 kB' 'AnonPages: 403380 kB' 'Shmem: 6849660 kB' 'KernelStack: 6600 kB' 'PageTables: 5124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128656 kB' 'Slab: 299868 kB' 'SReclaimable: 128656 kB' 'SUnreclaim: 171212 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:30.549-00:03:30.551 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # read/compare loop: MemTotal through HugePages_Free (in the order printed above) -- no match for HugePages_Surp, continue
00:03:30.551 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:30.551 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:30.551 23:30:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:30.551 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:30.551 23:30:05
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:30.551 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:30.551 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:30.551 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:30.551 node0=512 expecting 512 00:03:30.551 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:30.551 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:30.551 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:30.551 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:30.551 node1=1024 expecting 1024 00:03:30.551 23:30:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:30.551 00:03:30.551 real 0m1.517s 00:03:30.551 user 0m0.647s 00:03:30.551 sys 0m0.831s 00:03:30.551 23:30:05 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:30.551 23:30:05 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:30.551 ************************************ 00:03:30.551 END TEST custom_alloc 00:03:30.551 ************************************ 00:03:30.809 23:30:05 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:30.809 23:30:05 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:30.810 23:30:05 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:30.810 23:30:05 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:30.810 23:30:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:30.810 ************************************ 00:03:30.810 START TEST no_shrink_alloc 00:03:30.810 ************************************ 00:03:30.810 23:30:05 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:30.810 23:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:30.810 23:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:30.810 23:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:30.810 23:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:30.810 23:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:30.810 23:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:30.810 23:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:30.810 23:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:30.810 23:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:30.810 23:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:30.810 23:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:30.810 23:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 
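The hugepages.sh trace just above derives the per-node request: the 2097152 kB test size divided by the 2048 kB Hugepagesize reported in the meminfo dumps gives nr_hugepages=1024, assigned to the single user-supplied node 0. A minimal standalone sketch of that arithmetic, under the assumption that this is all the traced helper does on this path; the function name and free-standing form are illustrative, not the repo's actual code:

#!/usr/bin/env bash
# Sketch of the traced per-node hugepage computation (illustrative name).
get_test_nr_hugepages_sketch() {
    local size_kb=$1; shift              # e.g. 2097152 (2 GiB expressed in kB)
    local hugepage_kb=2048               # Hugepagesize: 2048 kB, per the dumps above
    local nr_hugepages=$(( size_kb / hugepage_kb ))
    local node
    for node in "$@"; do                 # user-supplied NUMA node ids
        echo "node${node}=${nr_hugepages}"
    done
}

get_test_nr_hugepages_sketch 2097152 0   # prints: node0=1024

That matches the nodes_test[_no_nodes]=1024 assignment in the trace that follows.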
00:03:30.810 23:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:30.810 23:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:30.810 23:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:30.810 23:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:30.810 23:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:30.810 23:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:30.810 23:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:30.810 23:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:30.810 23:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.810 23:30:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:32.193 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:32.193 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:32.193 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:32.193 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:32.193 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:32.193 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:32.193 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:32.193 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:32.193 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:32.193 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:32.193 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:32.193 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:32.193 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:32.193 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:32.193 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:32.193 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:32.193 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:32.193 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:32.193 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:32.193 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:32.193 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:32.193 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:32.193 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:32.193 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:32.193 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:32.193 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:32.193 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:32.193 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:32.193 23:30:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:32.193 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.193 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.193 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.193 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.193 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.193 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.193 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.193 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.193 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41846640 kB' 'MemAvailable: 45392844 kB' 'Buffers: 2704 kB' 'Cached: 14277636 kB' 'SwapCached: 0 kB' 'Active: 11248964 kB' 'Inactive: 3510464 kB' 'Active(anon): 10812548 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510464 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 482452 kB' 'Mapped: 194556 kB' 'Shmem: 10333460 kB' 'KReclaimable: 186324 kB' 'Slab: 539568 kB' 'SReclaimable: 186324 kB' 'SUnreclaim: 353244 kB' 'KernelStack: 12896 kB' 'PageTables: 8108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 11915924 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196272 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1611356 kB' 'DirectMap2M: 20328448 kB' 'DirectMap1G: 47185920 kB' 00:03:32.193 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.193 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.193 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.193 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.193 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.193 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.193 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.193 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.193 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.193 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.193 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.193 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.193 
23:30:07 setup.sh.hugepages.no_shrink_alloc -- [xtrace trimmed: setup/common.sh@31-32 scans /proc/meminfo field by field (Buffers through VmallocUsed), continuing past every key that is not AnonHugePages] 00:03:32.195 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val
_ 00:03:32.195 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.195 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.195 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.195 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.195 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.195 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.195 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.195 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.195 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.195 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.195 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.195 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.195 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.195 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.195 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:32.195 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:32.195 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:32.195 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.195 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:32.195 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:32.195 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.195 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.195 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.195 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.195 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.195 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.195 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.195 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.195 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41845820 kB' 'MemAvailable: 45392024 kB' 'Buffers: 2704 kB' 'Cached: 14277636 kB' 'SwapCached: 0 kB' 'Active: 11249996 kB' 'Inactive: 3510464 kB' 'Active(anon): 10813580 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510464 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483472 kB' 'Mapped: 194600 kB' 'Shmem: 10333460 kB' 'KReclaimable: 186324 
kB' 'Slab: 539548 kB' 'SReclaimable: 186324 kB' 'SUnreclaim: 353224 kB' 'KernelStack: 13152 kB' 'PageTables: 8196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 11917332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196256 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1611356 kB' 'DirectMap2M: 20328448 kB' 'DirectMap1G: 47185920 kB' 00:03:32.195 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.195 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.195 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.195 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.195 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.196 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.196 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.196 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.196 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.196 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.196 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.196 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.196 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.196 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.196 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.196 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.196 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.196 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.196 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.196 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.196 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.196 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.196 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.196 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.196 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.196 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.196 23:30:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' [xtrace trimmed: setup/common.sh@31-32 scans /proc/meminfo field by field (Inactive through HugePages_Total), continuing past every key that is not HugePages_Surp] 00:03:32.198 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
[... xtrace elided: setup/common.sh@32 compares each remaining /proc/meminfo field against HugePages_Surp and continues past every non-match ...]
00:03:32.198 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:32.198 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:32.198 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:32.198 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:32.198 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:32.198 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:32.198 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:32.198 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:32.198 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:32.198 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:32.198 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:32.198 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:32.198 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:32.198 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:32.198 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:32.198 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:32.198 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41849576 kB' 'MemAvailable: 45395780 kB' 'Buffers: 2704 kB' 'Cached: 14277652 kB' 'SwapCached: 0 kB' 'Active: 11248276 kB' 'Inactive: 3510464 kB' 'Active(anon): 10811860 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510464 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481624 kB' 'Mapped: 194532 kB' 'Shmem: 10333476 kB' 'KReclaimable: 186324 kB' 'Slab: 539540 kB' 'SReclaimable: 186324 kB' 'SUnreclaim: 353216 kB' 'KernelStack: 13104 kB' 'PageTables: 8312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 11916328 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196224 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1611356 kB' 'DirectMap2M: 20328448 kB' 'DirectMap1G: 47185920 kB'
[... xtrace elided: every field of the snapshot above is compared against HugePages_Rsvd and skipped until the match below ...]
00:03:32.201 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:32.201 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:32.201 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:32.201 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:32.201 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:03:32.201 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:32.201 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:32.201 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:32.201 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:32.201 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:32.201 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
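The scans elided above all implement one lookup: get_meminfo walks the chosen meminfo file line by line and prints the value of the first field whose name matches its argument. A minimal standalone sketch reconstructed from this trace (simplified for readability; not the verbatim spdk test/setup/common.sh source):

    #!/usr/bin/env bash
    shopt -s extglob  # needed for the +([0-9]) pattern used to strip "Node N " prefixes

    # Reconstructed from the xtrace above: print the value of field $1,
    # reading node-local sysfs meminfo when a node id is passed as $2.
    get_meminfo() {
        local get=$1
        local node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        [[ -e /sys/devices/system/node/node$node/meminfo ]] && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # sysfs lines carry a "Node N " prefix; /proc lines do not
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"       # e.g. 1024 for HugePages_Total, 0 for HugePages_Rsvd
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Rsvd    # -> 0 on this box
    get_meminfo HugePages_Surp 0  # -> per-node value from node0/meminfo

With IFS=': ', a line such as "MemTotal: 60541724 kB" splits into var=MemTotal, val=60541724, with the unit landing in the discarded third field, which is why the trace echoes bare numbers.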
00:03:32.201 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:32.201 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:32.201 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:32.201 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:32.201 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:32.201 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:32.201 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:32.201 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:32.201 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:32.201 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:32.201 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:32.201 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41849608 kB' 'MemAvailable: 45395812 kB' 'Buffers: 2704 kB' 'Cached: 14277680 kB' 'SwapCached: 0 kB' 'Active: 11248584 kB' 'Inactive: 3510464 kB' 'Active(anon): 10812168 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510464 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481976 kB' 'Mapped: 194472 kB' 'Shmem: 10333504 kB' 'KReclaimable: 186324 kB' 'Slab: 539528 kB' 'SReclaimable: 186324 kB' 'SUnreclaim: 353204 kB' 'KernelStack: 13152 kB' 'PageTables: 8564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 11916352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196272 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1611356 kB' 'DirectMap2M: 20328448 kB' 'DirectMap1G: 47185920 kB'
[... xtrace elided: every field of the snapshot above is compared against HugePages_Total and skipped until the match below ...]
00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
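The arithmetic asserted next, at hugepages.sh@110, is the core of the no_shrink_alloc check: every configured huge page must be accounted for as allocated, reserved, or surplus. Condensed into a hypothetical helper (the name is invented here; the real test inlines these steps, and the get_meminfo sketch above is reused):

    # Hypothetical condensation of the check traced at hugepages.sh@99-110.
    check_hugepages_accounting() {
        local nr_hugepages=$1  # the count the test configured; 1024 in this run
        local total surp resv
        surp=$(get_meminfo HugePages_Surp)    # -> 0 above
        resv=$(get_meminfo HugePages_Rsvd)    # -> 0 above
        total=$(get_meminfo HugePages_Total)  # -> 1024 above
        # every configured huge page must show up somewhere: 1024 == 1024 + 0 + 0
        (( total == nr_hugepages + surp + resv ))
    }

    check_hugepages_accounting 1024 && echo "accounting consistent"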
setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 25988376 kB' 'MemUsed: 6888564 kB' 'SwapCached: 0 kB' 'Active: 3736872 kB' 'Inactive: 201288 kB' 'Active(anon): 3563740 kB' 'Inactive(anon): 0 kB' 'Active(file): 173132 kB' 'Inactive(file): 201288 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3858192 kB' 'Mapped: 94140 kB' 'AnonPages: 83188 kB' 'Shmem: 3483772 kB' 'KernelStack: 6376 kB' 'PageTables: 2692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 57668 kB' 'Slab: 239664 kB' 'SReclaimable: 57668 kB' 'SUnreclaim: 181996 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.204 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
[setup/common.sh xtrace elided: the get_meminfo field loop reads each remaining /proc/meminfo key with IFS=': ' read -r var val _, tests it against HugePages_Surp, and skips it with "continue"]
00:03:32.206 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:32.206 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:32.206 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:32.206 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:32.206 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:32.206 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:32.206 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:32.206 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:32.206 node0=1024 expecting 1024
00:03:32.206 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:32.206 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:32.206 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:32.206 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:03:32.206 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:32.206 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:33.587 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:33.587 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:33.587 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:33.587 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:33.587 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:33.587 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:33.587 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:33.587 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:33.587 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:33.587 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:33.587 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:33.587 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:33.587 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:33.587 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:33.587 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:33.588 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:33.588 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:33.588 INFO: Requested 512 hugepages but 1024 already allocated on node0
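The field-matching loop elided above (and repeated for every get_meminfo call below) follows one simple pattern: read /proc/meminfo a line at a time with IFS=': ' read -r var val _, skip non-matching keys with continue, and echo the value once the requested key matches. A minimal sketch of that pattern in bash, assuming plain /proc/meminfo input; the function name is illustrative and this is not the literal setup/common.sh helper:

get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Every non-matching key falls through here; this is what produces
        # the long runs of "continue" records in the xtrace.
        [[ $var == "$get" ]] || continue
        echo "$val"   # with IFS=': ', any trailing "kB" unit lands in $_
        return 0
    done </proc/meminfo
    return 1
}

# e.g. get_meminfo_sketch HugePages_Free would print 1024 on this node.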
00:03:33.588 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
[setup/hugepages.sh xtrace elided: verify_nr_hugepages declares its locals (node, sorted_t, sorted_s, surp, resv, anon)]
00:03:33.588 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:33.588 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
[setup/common.sh xtrace elided: get_meminfo sets get=AnonHugePages, finds no per-node meminfo file to use, and reads /proc/meminfo into the mem array with mapfile]
00:03:33.588 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41838464 kB' 'MemAvailable: 45384668 kB' 'Buffers: 2704 kB' 'Cached: 14277744 kB' 'SwapCached: 0 kB' 'Active: 11254848 kB' 'Inactive: 3510464 kB' 'Active(anon): 10818432 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510464 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488556 kB' 'Mapped: 195200 kB' 'Shmem: 10333568 kB' 'KReclaimable: 186324 kB' 'Slab: 539612 kB' 'SReclaimable: 186324 kB' 'SUnreclaim: 353288 kB' 'KernelStack: 13392 kB' 'PageTables: 9160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 11923756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196484 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1611356 kB' 'DirectMap2M: 20328448 kB' 'DirectMap1G: 47185920 kB'
[setup/common.sh xtrace elided: each snapshot key is tested against AnonHugePages and skipped with "continue" until the AnonHugePages line matches]
00:03:33.589 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:33.589 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:33.589 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:33.589 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
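get_meminfo also takes an optional node argument, which is why each call probes /sys/devices/system/node/node/meminfo (the node number is empty here because none was passed). A sketch of that per-node path, mirroring the mapfile read and the "Node +([0-9]) " prefix strip visible in the trace; the function name and the fallback to /proc/meminfo are illustrative assumptions:

shopt -s extglob   # the +([0-9]) pattern below needs extended globbing
get_node_meminfo_sketch() {
    local get=$1 node=$2 entry var val _
    local mem_f=/sys/devices/system/node/node$node/meminfo
    [[ -e $mem_f ]] || mem_f=/proc/meminfo   # assumed fallback when the node file is absent
    local -a mem
    mapfile -t mem <"$mem_f"
    # Per-node meminfo lines carry a "Node <n> " prefix; strip it so the
    # keys line up with the plain /proc/meminfo format.
    mem=("${mem[@]#Node +([0-9]) }")
    for entry in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$entry"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}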
[setup/common.sh xtrace elided: get_meminfo sets get=HugePages_Surp, declares its locals, again finds no per-node meminfo file, and re-reads /proc/meminfo with mapfile]
00:03:33.589 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41840444 kB' 'MemAvailable: 45386648 kB' 'Buffers: 2704 kB' 'Cached: 14277744 kB' 'SwapCached: 0 kB' 'Active: 11249412 kB' 'Inactive: 3510464 kB' 'Active(anon): 10812996 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510464 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483084 kB' 'Mapped: 194784 kB' 'Shmem: 10333568 kB' 'KReclaimable: 186324 kB' 'Slab: 539588 kB' 'SReclaimable: 186324 kB' 'SUnreclaim: 353264 kB' 'KernelStack: 13408 kB' 'PageTables: 8692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 11918744 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196512 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1611356 kB' 'DirectMap2M: 20328448 kB' 'DirectMap1G: 47185920 kB'
[setup/common.sh xtrace elided: each snapshot key is tested against HugePages_Surp and skipped with "continue" until the HugePages_Surp line matches]
00:03:33.591 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:33.591 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:33.591 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:33.591 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
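The values being collected in this stretch (anon, then surp, now resv) feed the consistency check that verify_nr_hugepages performs per node. A rough sketch of that bookkeeping, reusing the illustrative get_meminfo_sketch helper from above; the final comparison is an assumption about the shape of the check, not the literal setup/hugepages.sh logic:

anon=$(get_meminfo_sketch AnonHugePages)    # 0 in this log
surp=$(get_meminfo_sketch HugePages_Surp)   # 0 in this log
resv=$(get_meminfo_sketch HugePages_Rsvd)   # 0 in this log
free=$(get_meminfo_sketch HugePages_Free)   # 1024 in this log
expected=1024                               # "node0=1024 expecting 1024" above
# Hypothetical check: surplus or reserved pages would skew the usable pool,
# so a clean run expects zeros there before comparing free pages to the target.
if (( surp == 0 && resv == 0 && free - resv == expected )); then
    echo "hugepage count verified"
fi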
[setup/common.sh xtrace elided: get_meminfo sets get=HugePages_Rsvd, declares its locals, finds no per-node meminfo file, and re-reads /proc/meminfo with mapfile]
00:03:33.592 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41839968 kB' 'MemAvailable: 45386172 kB' 'Buffers: 2704 kB' 'Cached: 14277744 kB' 'SwapCached: 0 kB' 'Active: 11253576 kB' 'Inactive: 3510464 kB' 'Active(anon): 10817160 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510464 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 486808 kB' 'Mapped: 194784 kB' 'Shmem: 10333568 kB' 'KReclaimable: 186324 kB' 'Slab: 539580 kB' 'SReclaimable: 186324 kB' 'SUnreclaim: 353256 kB' 'KernelStack: 13456 kB' 'PageTables: 8944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 11919840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196208 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1611356 kB' 'DirectMap2M: 20328448 kB' 'DirectMap1G: 47185920 kB'
[setup/common.sh xtrace elided: each snapshot key is tested against HugePages_Rsvd and skipped with "continue"]
# IFS=': ' 00:03:33.592 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.592 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.592 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.593 23:30:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.593 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.594 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.594 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.594 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:33.594 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:33.594 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:33.594 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:33.594 nr_hugepages=1024 00:03:33.594 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:33.594 resv_hugepages=0 00:03:33.594 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:33.594 surplus_hugepages=0 00:03:33.594 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:33.594 anon_hugepages=0 00:03:33.594 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:33.594 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:33.594 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:33.594 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:33.594 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:33.594 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:33.594 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:33.594 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
00:03:33.594 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:33.594 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:33.594 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:33.594 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:33.594 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:33.594 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.594 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:33.594 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:33.594 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.594 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.594 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.594 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.594 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41841028 kB' 'MemAvailable: 45387232 kB' 'Buffers: 2704 kB' 'Cached: 14277764 kB' 'SwapCached: 0 kB' 'Active: 11253508 kB' 'Inactive: 3510464 kB' 'Active(anon): 10817092 kB' 'Inactive(anon): 0 kB' 'Active(file): 436416 kB' 'Inactive(file): 3510464 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 486196 kB' 'Mapped: 194844 kB' 'Shmem: 10333588 kB' 'KReclaimable: 186324 kB' 'Slab: 539604 kB' 'SReclaimable: 186324 kB' 'SUnreclaim: 353280 kB' 'KernelStack: 12912 kB' 'PageTables: 7720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 11921456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196228 kB' 'VmallocChunk: 0 kB' 'Percpu: 31872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1611356 kB' 'DirectMap2M: 20328448 kB' 'DirectMap1G: 47185920 kB'
00:03:33.594 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:33.594 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... 47 further setup/common.sh@31-32 iterations elided: each remaining snapshot key, MemFree through Unaccepted, is read and rejected with "continue" ...]
00:03:33.596 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:33.596 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:33.596 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:33.596 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:33.596 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:33.596 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:33.596 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:33.596 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:33.596 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:33.596 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:33.596 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:33.596 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
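get_nodes, traced above, discovers the NUMA topology by globbing /sys/devices/system/node/node+([0-9]) (an extglob pattern) and records each node's hugepage count; on this runner node0 holds all 1024 pages and node1 none. A sketch of that enumeration under the standard sysfs hugepage layout; reading nr_hugepages directly is our simplification:

    #!/usr/bin/env bash
    # Sketch of get_nodes-style NUMA enumeration (illustrative).
    shopt -s extglob nullglob

    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} strips everything through the last "node",
        # leaving the numeric index (node0 -> 0).
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) || { echo "no NUMA nodes found" >&2; exit 1; }
    for n in "${!nodes_sys[@]}"; do
        echo "node$n: ${nodes_sys[$n]} hugepages"   # e.g. node0: 1024, node1: 0
    done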
00:03:33.596 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:33.596 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:33.596 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:33.596 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:33.596 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:33.596 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:33.596 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:33.596 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.596 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:33.596 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:33.596 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.596 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.596 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.596 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.596 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 25990848 kB' 'MemUsed: 6886092 kB' 'SwapCached: 0 kB' 'Active: 3736200 kB' 'Inactive: 201288 kB' 'Active(anon): 3563068 kB' 'Inactive(anon): 0 kB' 'Active(file): 173132 kB' 'Inactive(file): 201288 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3858196 kB' 'Mapped: 94992 kB' 'AnonPages: 82432 kB' 'Shmem: 3483776 kB' 'KernelStack: 6264 kB' 'PageTables: 2876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 57668 kB' 'Slab: 239824 kB' 'SReclaimable: 57668 kB' 'SUnreclaim: 182156 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:33.596 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:33.596 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... 35 further setup/common.sh@31-32 iterations elided: each remaining node0 key, MemFree through HugePages_Free, is read and rejected with "continue" ...]
00:03:33.597 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:33.597 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:33.597 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
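For this per-node query, get_meminfo swaps mem_f to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix and a reduced key set (note MemUsed and FilePages appear here but not in /proc/meminfo). The prefix strip is the extglob expansion traced at common.sh@29; in isolation it behaves like this (sample lines taken from the snapshot above):

    #!/usr/bin/env bash
    # Demonstrates the "Node <N> " prefix stripping applied to
    # per-node meminfo lines before parsing.
    shopt -s extglob

    mem=('Node 0 MemTotal:       32876940 kB'
         'Node 0 MemFree:        25990848 kB'
         'Node 0 HugePages_Surp:        0')
    # "${mem[@]#pattern}" removes the shortest matching prefix from
    # every array element; +([0-9]) is extglob for one or more digits.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"
    # -> MemTotal:       32876940 kB
    # -> MemFree:        25990848 kB
    # -> HugePages_Surp:        0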
00:03:33.597 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:33.597 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:33.597 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:33.597 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:33.597 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:33.597 node0=1024 expecting 1024
00:03:33.597 23:30:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:33.597
00:03:33.597 real 0m2.885s
00:03:33.597 user 0m1.158s
00:03:33.597 sys 0m1.654s
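The check traced as [[ 1024 == \1\0\2\4 ]] is the node0-count-versus-expectation assertion. The backslashes are an xtrace artifact: bash escapes the right-hand side of == when it was quoted in the source, signalling a literal comparison rather than a glob match, which is exactly what a test like this wants. A tiny demo (rendering as observed in recent bash; exact escaping may vary by version):

    #!/usr/bin/env bash
    # Why the log shows [[ 1024 == \1\0\2\4 ]]: a quoted RHS is
    # re-printed character-escaped by `set -x` to mark it literal.
    set -x
    expected=1024 actual=1024
    [[ $actual == "$expected" ]] && echo ok   # traced as [[ 1024 == \1\0\2\4 ]]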
setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:33.597 23:30:08 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:33.597 ************************************ 00:03:33.597 END TEST no_shrink_alloc 00:03:33.597 ************************************ 00:03:33.597 23:30:08 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:33.597 23:30:08 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:33.597 23:30:08 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:33.597 23:30:08 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:33.597 23:30:08 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:33.597 23:30:08 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:33.597 23:30:08 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:33.597 23:30:08 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:33.597 23:30:08 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:33.597 23:30:08 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:33.597 23:30:08 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:33.598 23:30:08 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:33.598 23:30:08 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:33.598 23:30:08 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:33.598 23:30:08 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:33.598 00:03:33.598 real 0m11.818s 00:03:33.598 user 0m4.527s 00:03:33.598 sys 0m6.203s 00:03:33.598 23:30:08 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:33.598 23:30:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:33.598 ************************************ 00:03:33.598 END TEST hugepages 00:03:33.598 ************************************ 00:03:33.598 23:30:08 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:33.598 23:30:08 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:33.598 23:30:08 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:33.598 23:30:08 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:33.598 23:30:08 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:33.598 ************************************ 00:03:33.598 START TEST driver 00:03:33.598 ************************************ 00:03:33.598 23:30:08 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:33.856 * Looking for test storage... 
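
The clear_hp entries above zero out every per-node hugepage pool through sysfs before the next test group starts. A hedged sketch of that teardown; the log shows only `echo 0` per path, so the nr_hugepages filename is our assumption about the usual sysfs layout:

    # Zero each per-node hugepage pool, then flag the cleanup for later stages.
    for node in /sys/devices/system/node/node*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"   # filename assumed; trace shows only 'echo 0'
        done
    done
    export CLEAR_HUGE=yes                 # matches setup/hugepages.sh@45
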
00:03:33.856 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:33.856 23:30:08 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:33.856 23:30:08 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:33.856 23:30:08 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:36.391 23:30:11 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:36.391 23:30:11 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:36.391 23:30:11 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:36.391 23:30:11 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:36.391 ************************************ 00:03:36.391 START TEST guess_driver 00:03:36.391 ************************************ 00:03:36.391 23:30:11 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:36.391 23:30:11 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:36.391 23:30:11 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:36.391 23:30:11 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:36.391 23:30:11 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:36.391 23:30:11 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:36.391 23:30:11 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:36.391 23:30:11 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:36.391 23:30:11 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:36.391 23:30:11 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:36.391 23:30:11 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:03:36.391 23:30:11 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:36.391 23:30:11 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:36.391 23:30:11 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:36.391 23:30:11 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:36.391 23:30:11 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:36.391 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:36.391 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:36.391 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:36.391 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:36.391 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:36.391 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:36.391 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:36.391 23:30:11 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:36.391 23:30:11 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:36.391 23:30:11 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:36.391 23:30:11 setup.sh.driver.guess_driver 
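
guess_driver settles on vfio-pci here because the host exposes 141 IOMMU groups and `modprobe --show-depends vfio_pci` resolves to real .ko.xz modules, as the insmod lines above show. A sketch of that decision; the non-vfio fallback name is assumed, since this run never reaches it:

    # Prefer vfio-pci when IOMMU groups exist and the dependency chain is real.
    iommu_groups=(/sys/kernel/iommu_groups/*)
    if (( ${#iommu_groups[@]} > 0 )) && \
       modprobe --show-depends vfio_pci | grep -q '\.ko'; then
        driver=vfio-pci
    else
        driver=uio_pci_generic   # assumed fallback; not exercised in this log
    fi
    echo "Looking for driver=$driver"
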
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:36.391 23:30:11 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:36.391 Looking for driver=vfio-pci 00:03:36.391 23:30:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:36.391 23:30:11 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:36.391 23:30:11 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.391 23:30:11 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:37.767 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.767 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.767 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.767 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.767 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.767 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.767 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.767 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.767 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.767 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.767 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.767 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.767 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.767 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.767 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.767 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.767 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.767 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.767 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.767 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.767 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.767 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.767 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.767 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.767 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.767 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.767 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.767 23:30:12 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.768 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.768 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.768 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.768 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.768 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.768 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.768 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.768 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.768 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.768 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.768 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.768 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.768 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.768 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.768 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.768 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.768 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.768 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.768 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.768 23:30:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.703 23:30:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.703 23:30:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.703 23:30:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.703 23:30:13 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:38.703 23:30:13 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:38.703 23:30:13 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:38.703 23:30:13 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:41.994 00:03:41.994 real 0m5.078s 00:03:41.994 user 0m1.159s 00:03:41.994 sys 0m1.907s 00:03:41.994 23:30:16 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:41.994 23:30:16 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:41.994 ************************************ 00:03:41.994 END TEST guess_driver 00:03:41.994 ************************************ 00:03:41.994 23:30:16 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:41.994 00:03:41.994 real 0m7.752s 00:03:41.994 user 0m1.771s 00:03:41.994 sys 0m2.941s 00:03:41.994 23:30:16 
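
Every `START TEST` / `END TEST` pair in this log, including the guess_driver block that just finished in 5.078s wall time, comes from the run_test wrapper in autotest_common.sh. A sketch of its observable contract only; the real function also handles xtrace toggling and failure accounting:

    # Name a test, time it, and bracket its output with the banners seen above.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                 # produces the real/user/sys lines in the log
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
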
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:41.994 23:30:16 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:03:41.994 ************************************
00:03:41.994 END TEST driver
00:03:41.994 ************************************
00:03:41.994 23:30:16 setup.sh -- common/autotest_common.sh@1142 -- # return 0
00:03:41.994 23:30:16 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:03:41.994 23:30:16 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:41.994 23:30:16 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:41.994 23:30:16 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:41.994 ************************************
00:03:41.994 START TEST devices
00:03:41.994 ************************************
00:03:41.994 23:30:16 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:03:41.994 * Looking for test storage...
00:03:41.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:41.994 23:30:16 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT
00:03:41.994 23:30:16 setup.sh.devices -- setup/devices.sh@192 -- # setup reset
00:03:41.994 23:30:16 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:41.994 23:30:16 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:42.929 23:30:18 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs
00:03:42.929 23:30:18 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:03:42.929 23:30:18 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:03:42.929 23:30:18 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf
00:03:42.929 23:30:18 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:03:42.929 23:30:18 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:03:42.929 23:30:18 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:03:42.929 23:30:18 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:42.929 23:30:18 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:03:42.929 23:30:18 setup.sh.devices -- setup/devices.sh@196 -- # blocks=()
00:03:42.929 23:30:18 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks
00:03:42.929 23:30:18 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=()
00:03:42.929 23:30:18 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:03:42.929 23:30:18 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:03:42.929 23:30:18 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:03:42.929 23:30:18 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:03:42.929 23:30:18 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0
00:03:42.929 23:30:18 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:0b:00.0
00:03:42.929 23:30:18 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\b\:\0\0\.\0* ]]
00:03:42.929 23:30:18 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:03:42.929 23:30:18 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:03:42.929
23:30:18 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:43.187 No valid GPT data, bailing 00:03:43.187 23:30:18 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:43.187 23:30:18 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:43.187 23:30:18 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:43.187 23:30:18 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:43.187 23:30:18 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:43.187 23:30:18 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:43.187 23:30:18 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:43.187 23:30:18 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:43.187 23:30:18 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:43.187 23:30:18 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:0b:00.0 00:03:43.187 23:30:18 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:43.187 23:30:18 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:43.187 23:30:18 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:43.187 23:30:18 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:43.187 23:30:18 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.187 23:30:18 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:43.187 ************************************ 00:03:43.187 START TEST nvme_mount 00:03:43.187 ************************************ 00:03:43.187 23:30:18 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:43.187 23:30:18 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:43.187 23:30:18 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:43.187 23:30:18 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.187 23:30:18 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:43.187 23:30:18 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:43.187 23:30:18 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:43.187 23:30:18 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:43.187 23:30:18 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:43.187 23:30:18 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:43.187 23:30:18 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:43.187 23:30:18 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:43.187 23:30:18 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:43.187 23:30:18 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:43.187 23:30:18 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:43.187 23:30:18 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:43.187 23:30:18 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
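
Just above, devices.sh admitted nvme0n1 into the test pool: spdk-gpt.py found no GPT, blkid reported no partition-table type, and the 1000204886016-byte capacity clears the 3 GiB floor. A compact sketch of that gate; the sector arithmetic is our reading of what sec_size_to_bytes computes:

    # Admit a disk only if it is unpartitioned and at least min_disk_size bytes.
    declare -A blocks_to_pci; blocks=()
    min_disk_size=$((3 * 1024 * 1024 * 1024))          # 3221225472, as in the log
    block=nvme0n1
    pt=$(blkid -s PTTYPE -o value "/dev/$block" || true)
    size=$(( $(cat "/sys/block/$block/size") * 512 ))  # sectors -> bytes (assumed helper logic)
    if [[ -z $pt ]] && (( size >= min_disk_size )); then
        blocks+=("$block")
        blocks_to_pci[$block]=0000:0b:00.0
    fi
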
# (( part <= part_no )) 00:03:43.187 23:30:18 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:43.187 23:30:18 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:43.187 23:30:18 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:44.124 Creating new GPT entries in memory. 00:03:44.124 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:44.124 other utilities. 00:03:44.124 23:30:19 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:44.124 23:30:19 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:44.124 23:30:19 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:44.124 23:30:19 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:44.124 23:30:19 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:45.060 Creating new GPT entries in memory. 00:03:45.060 The operation has completed successfully. 00:03:45.060 23:30:20 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:45.060 23:30:20 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:45.060 23:30:20 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3650626 00:03:45.060 23:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:45.060 23:30:20 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:45.060 23:30:20 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:45.060 23:30:20 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:45.060 23:30:20 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:45.317 23:30:20 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:45.317 23:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:0b:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:45.317 23:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:03:45.317 23:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:45.317 23:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:45.317 23:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:45.317 23:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:45.317 23:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:45.317 23:30:20 
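
The nvme_mount body above wipes the disk label, waits for the new partition's uevent, then formats and mounts it. The same sequence as a standalone sketch, with the mount point spelled out; sync_dev_uevents.sh is what the real script uses to avoid racing the kernel:

    # Repartition, format, and mount the test partition, as traced above.
    disk=nvme0n1
    mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
    sgdisk "/dev/$disk" --zap-all
    flock "/dev/$disk" sgdisk "/dev/$disk" --new=1:2048:2099199   # one 1 GiB partition
    mkfs.ext4 -qF "/dev/${disk}p1"
    mkdir -p "$mount_point"
    mount "/dev/${disk}p1" "$mount_point"
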
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:45.317 23:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:45.317 23:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.317 23:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:03:45.317 23:30:20 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:45.317 23:30:20 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.317 23:30:20 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:46.251 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:46.251 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.251 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:46.251 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.251 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:46.251 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.251 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:46.251 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.251 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:46.251 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.251 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:46.251 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.251 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:46.251 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.251 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:46.251 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.251 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:46.251 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:46.251 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:46.251 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.251 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:46.251 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.251 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:46.251 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.251 23:30:21 setup.sh.devices.nvme_mount -- 
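
The block of `[[ 0000:00:04.x == ... ]]` comparisons above (and continuing below) is verify() walking the output of `setup.sh status` line by line, ignoring every PCI address except the one in PCI_ALLOWED; found=1 fires when that address reports the expected mount as an active device. The loop's shape, sketched with $rootdir as our shorthand for the workspace spdk checkout:

    # Scan setup.sh status for the allowed device and its active-mount marker.
    found=0
    while read -r pci _ _ status; do
        [[ $pci == "$dev" ]] || continue          # dev=0000:0b:00.0 in this run
        [[ $status == *"Active devices: "*"$mounts"* ]] && found=1
    done < <(PCI_ALLOWED="$dev" "$rootdir/scripts/setup.sh" status)
    (( found == 1 ))                              # verify fails otherwise
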
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:46.251 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.251 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:46.251 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.251 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:46.251 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.251 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:46.251 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.251 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:46.251 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.251 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:46.251 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.514 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:46.514 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:46.514 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:46.514 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:46.514 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:46.514 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:46.514 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:46.514 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:46.514 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:46.514 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:46.514 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:46.514 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:46.514 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:46.772 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:46.772 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:46.772 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:46.772 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:46.772 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:46.772 23:30:21 setup.sh.devices.nvme_mount -- 
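
cleanup_nvme, traced above, unmounts the test directory and then scrubs every signature wipefs can find, which is why the log lists the ext4 magic at offset 0x438 and both GPT headers being erased. Its core steps, sketched:

    # Unmount if mounted, then wipe partition and parent-disk signatures.
    nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
    mountpoint -q "$nvme_mount" && umount "$nvme_mount"
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
    [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1
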
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:46.772 23:30:21 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:46.772 23:30:21 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:46.772 23:30:21 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:46.772 23:30:21 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:47.047 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:0b:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:47.047 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:03:47.047 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:47.047 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:47.047 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:47.047 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:47.047 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:47.047 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:47.047 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:47.047 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.047 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:03:47.047 23:30:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:47.047 23:30:21 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.047 23:30:21 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:48.004 23:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:48.004 23:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.004 23:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:48.004 23:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.004 23:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:48.004 23:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.004 23:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:48.004 23:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.004 23:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:48.004 23:30:22 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.004 23:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:48.004 23:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.004 23:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:48.005 23:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.005 23:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:48.005 23:30:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.005 23:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:48.005 23:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:48.005 23:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:48.005 23:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.005 23:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:48.005 23:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.005 23:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:48.005 23:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.005 23:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:48.005 23:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.005 23:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:48.005 23:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.005 23:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:48.005 23:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.005 23:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:48.005 23:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.005 23:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:48.005 23:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.005 23:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:48.005 23:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.263 23:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:48.263 23:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:48.263 23:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:48.263 23:30:23 
setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:48.263 23:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:48.263 23:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:48.263 23:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:0b:00.0 data@nvme0n1 '' '' 00:03:48.263 23:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:03:48.263 23:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:48.263 23:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:48.263 23:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:48.263 23:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:48.263 23:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:48.264 23:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:48.264 23:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.264 23:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:03:48.264 23:30:23 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:48.264 23:30:23 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.264 23:30:23 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:49.641 23:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:49.641 23:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.641 23:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:49.641 23:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.641 23:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:49.641 23:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.641 23:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:49.641 23:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.641 23:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:49.641 23:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.641 23:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:49.641 23:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.641 23:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:49.641 23:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.641 23:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:49.641 23:30:24 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.641 23:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:49.641 23:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:49.641 23:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:49.641 23:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.641 23:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:49.641 23:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.641 23:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:49.641 23:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.641 23:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:49.641 23:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.641 23:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:49.641 23:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.641 23:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:49.641 23:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.641 23:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:49.641 23:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.641 23:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:49.641 23:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.641 23:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:49.641 23:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.641 23:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:49.641 23:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:49.641 23:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:49.641 23:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:49.641 23:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:49.641 23:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:49.641 23:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:49.641 23:30:24 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:49.641 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:49.641 00:03:49.641 real 0m6.579s 00:03:49.641 user 0m1.571s 00:03:49.641 sys 0m2.589s 00:03:49.641 23:30:24 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:49.641 23:30:24 
setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:49.641 ************************************ 00:03:49.641 END TEST nvme_mount 00:03:49.641 ************************************ 00:03:49.641 23:30:24 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:49.641 23:30:24 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:49.641 23:30:24 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:49.641 23:30:24 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:49.641 23:30:24 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:49.641 ************************************ 00:03:49.641 START TEST dm_mount 00:03:49.641 ************************************ 00:03:49.641 23:30:24 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:03:49.641 23:30:24 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:49.641 23:30:24 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:49.641 23:30:24 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:49.641 23:30:24 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:49.641 23:30:24 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:49.641 23:30:24 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:49.641 23:30:24 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:49.641 23:30:24 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:49.641 23:30:24 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:49.641 23:30:24 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:49.641 23:30:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:49.641 23:30:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:49.641 23:30:24 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:49.641 23:30:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:49.641 23:30:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:49.642 23:30:24 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:49.642 23:30:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:49.642 23:30:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:49.642 23:30:24 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:49.642 23:30:24 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:49.642 23:30:24 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:51.018 Creating new GPT entries in memory. 00:03:51.018 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:51.018 other utilities. 00:03:51.018 23:30:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:51.018 23:30:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:51.018 23:30:25 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:51.018 23:30:25 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:51.018 23:30:25 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:51.958 Creating new GPT entries in memory. 00:03:51.958 The operation has completed successfully. 00:03:51.958 23:30:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:51.958 23:30:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:51.958 23:30:26 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:51.958 23:30:26 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:51.958 23:30:26 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:52.894 The operation has completed successfully. 00:03:52.894 23:30:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:52.894 23:30:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:52.895 23:30:27 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3653022 00:03:52.895 23:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:52.895 23:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:52.895 23:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:52.895 23:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:52.895 23:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:52.895 23:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:52.895 23:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:52.895 23:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:52.895 23:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:52.895 23:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:52.895 23:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:52.895 23:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:52.895 23:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:52.895 23:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:52.895 23:30:27 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:52.895 23:30:27 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:52.895 23:30:27 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:52.895 23:30:27 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:52.895 23:30:27 setup.sh.devices.dm_mount -- 
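
dm_mount above stitches the two freshly created 1 GiB partitions into a single device-mapper node named nvme_dm_test, resolves it to /dev/dm-0, and formats it. The log never prints the dm table itself, so the linear mapping below is our assumption about what concatenating two 2097152-sector partitions looks like:

    # Build the mapper device from both partitions, then format it (table assumed).
    printf '%s\n' \
        '0 2097152 linear /dev/nvme0n1p1 0' \
        '2097152 2097152 linear /dev/nvme0n1p2 0' \
    | dmsetup create nvme_dm_test
    dm=$(readlink -f /dev/mapper/nvme_dm_test)   # -> /dev/dm-0 in this run
    mkfs.ext4 -qF /dev/mapper/nvme_dm_test
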
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:52.895 23:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:0b:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:52.895 23:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:03:52.895 23:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:52.895 23:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:52.895 23:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:52.895 23:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:52.895 23:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:52.895 23:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:52.895 23:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:52.895 23:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.895 23:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:03:52.895 23:30:27 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:52.895 23:30:27 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.895 23:30:27 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:53.833 23:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:53.833 23:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.833 23:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:53.833 23:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.833 23:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:53.833 23:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.833 23:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:53.833 23:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.833 23:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:53.833 23:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.833 23:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:53.833 23:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.833 23:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:53.833 23:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.833 23:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == 
\0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:53.833 23:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.833 23:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:53.833 23:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:53.833 23:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:53.833 23:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.833 23:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:53.833 23:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.833 23:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:53.833 23:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.833 23:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:53.833 23:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.833 23:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:53.833 23:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.833 23:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:53.833 23:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.833 23:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:53.833 23:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.833 23:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:53.833 23:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.833 23:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:53.833 23:30:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.093 23:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:54.093 23:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:54.093 23:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:54.093 23:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:54.093 23:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:54.093 23:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:54.093 23:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:0b:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:54.093 23:30:29 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:03:54.093 23:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:54.093 23:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:54.093 23:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:54.093 23:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:54.093 23:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:54.093 23:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:54.093 23:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.093 23:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:03:54.093 23:30:29 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:54.093 23:30:29 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.093 23:30:29 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:55.473 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:55.473 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.473 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:55.473 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.473 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:55.473 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.473 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:55.473 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.473 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:55.473 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.473 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:55.474 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.474 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:55.474 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.474 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:55.474 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.474 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:55.474 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:55.474 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:55.474 23:30:30 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.474 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:55.474 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.474 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:55.474 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.474 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:55.474 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.474 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:55.474 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.474 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:55.474 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.474 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:55.474 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.474 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:55.474 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.474 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:55.474 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.474 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:55.474 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:55.474 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:55.474 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:55.474 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:55.474 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:55.474 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:55.474 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:55.474 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:55.474 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:55.474 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:55.474 23:30:30 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:55.474 00:03:55.474 real 0m5.745s 00:03:55.474 user 0m0.992s 00:03:55.474 sys 0m1.635s 00:03:55.474 23:30:30 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.474 23:30:30 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:55.474 ************************************ 00:03:55.474 END TEST dm_mount 00:03:55.474 ************************************ 00:03:55.474 23:30:30 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:03:55.474 23:30:30 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:55.474 23:30:30 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:55.474 23:30:30 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:55.474 23:30:30 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:55.474 23:30:30 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:55.474 23:30:30 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:55.474 23:30:30 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:55.733 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:55.733 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:55.733 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:55.733 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:55.733 23:30:30 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:55.733 23:30:30 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:55.733 23:30:30 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:55.733 23:30:30 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:55.733 23:30:30 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:55.733 23:30:30 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:55.733 23:30:30 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:55.733 00:03:55.733 real 0m14.320s 00:03:55.733 user 0m3.201s 00:03:55.733 sys 0m5.351s 00:03:55.733 23:30:30 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.733 23:30:30 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:55.733 ************************************ 00:03:55.733 END TEST devices 00:03:55.733 ************************************ 00:03:55.733 23:30:30 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:55.733 00:03:55.733 real 0m45.196s 00:03:55.733 user 0m12.982s 00:03:55.733 sys 0m20.310s 00:03:55.733 23:30:30 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.733 23:30:30 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:55.733 ************************************ 00:03:55.733 END TEST setup.sh 00:03:55.733 ************************************ 00:03:55.733 23:30:30 -- common/autotest_common.sh@1142 -- # return 0 00:03:55.733 23:30:30 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:57.114 Hugepages 00:03:57.114 node hugesize free / total 00:03:57.114 node0 1048576kB 0 / 0 00:03:57.114 node0 2048kB 2048 / 2048 00:03:57.114 node1 1048576kB 0 / 0 00:03:57.114 node1 2048kB 0 / 0 00:03:57.114 00:03:57.114 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:57.114 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:57.114 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:57.114 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:57.114 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:57.114 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:57.114 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:57.114 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:57.114 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:57.114 NVMe 0000:0b:00.0 
8086 0a54 0 nvme nvme0 nvme0n1 00:03:57.114 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:57.114 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:57.114 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:57.114 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:57.114 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:57.114 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:57.114 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:57.114 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:57.114 23:30:32 -- spdk/autotest.sh@130 -- # uname -s 00:03:57.114 23:30:32 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:57.114 23:30:32 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:57.114 23:30:32 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:58.492 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:58.492 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:58.492 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:58.492 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:58.492 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:58.492 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:58.492 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:58.492 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:58.492 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:58.492 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:58.492 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:58.492 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:58.492 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:58.492 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:58.492 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:58.492 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:59.433 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:03:59.433 23:30:34 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:00.371 23:30:35 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:00.371 23:30:35 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:00.371 23:30:35 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:00.371 23:30:35 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:00.371 23:30:35 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:00.371 23:30:35 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:00.371 23:30:35 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:00.371 23:30:35 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:00.371 23:30:35 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:00.629 23:30:35 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:00.629 23:30:35 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:0b:00.0 00:04:00.629 23:30:35 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:01.561 Waiting for block devices as requested 00:04:01.561 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:01.561 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:01.820 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:01.820 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:01.820 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:01.820 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:02.078 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:02.078 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:02.078 0000:0b:00.0 (8086 0a54): vfio-pci -> 
nvme 00:04:02.337 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:02.337 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:02.594 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:02.594 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:02.594 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:02.594 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:02.851 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:02.851 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:02.851 23:30:37 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:02.851 23:30:37 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:0b:00.0 00:04:02.851 23:30:37 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:02.851 23:30:37 -- common/autotest_common.sh@1502 -- # grep 0000:0b:00.0/nvme/nvme 00:04:02.851 23:30:37 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:04:02.851 23:30:37 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 ]] 00:04:02.851 23:30:37 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:04:02.851 23:30:37 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:02.851 23:30:37 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:02.851 23:30:37 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:02.851 23:30:37 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:02.851 23:30:37 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:02.851 23:30:37 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:02.851 23:30:37 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:04:02.851 23:30:37 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:02.851 23:30:37 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:02.851 23:30:37 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:02.851 23:30:37 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:02.851 23:30:37 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:02.851 23:30:37 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:02.851 23:30:37 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:02.851 23:30:37 -- common/autotest_common.sh@1557 -- # continue 00:04:02.851 23:30:37 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:02.851 23:30:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:02.851 23:30:37 -- common/autotest_common.sh@10 -- # set +x 00:04:03.108 23:30:37 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:03.108 23:30:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:03.108 23:30:37 -- common/autotest_common.sh@10 -- # set +x 00:04:03.108 23:30:37 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:04.482 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:04.482 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:04.482 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:04.482 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:04.482 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:04.482 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:04.482 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:04.482 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:04.482 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:04.482 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:04.482 
0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:04.482 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:04.482 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:04.482 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:04.482 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:04.482 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:05.422 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:04:05.422 23:30:40 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:05.422 23:30:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:05.422 23:30:40 -- common/autotest_common.sh@10 -- # set +x 00:04:05.422 23:30:40 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:05.422 23:30:40 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:05.422 23:30:40 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:05.422 23:30:40 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:05.422 23:30:40 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:05.422 23:30:40 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:05.422 23:30:40 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:05.422 23:30:40 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:05.422 23:30:40 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:05.422 23:30:40 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:05.422 23:30:40 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:05.422 23:30:40 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:05.422 23:30:40 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:0b:00.0 00:04:05.422 23:30:40 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:05.422 23:30:40 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:0b:00.0/device 00:04:05.422 23:30:40 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:05.422 23:30:40 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:05.422 23:30:40 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:05.422 23:30:40 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:0b:00.0 00:04:05.422 23:30:40 -- common/autotest_common.sh@1592 -- # [[ -z 0000:0b:00.0 ]] 00:04:05.422 23:30:40 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=3658317 00:04:05.422 23:30:40 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:05.422 23:30:40 -- common/autotest_common.sh@1598 -- # waitforlisten 3658317 00:04:05.422 23:30:40 -- common/autotest_common.sh@829 -- # '[' -z 3658317 ']' 00:04:05.422 23:30:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:05.422 23:30:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:05.422 23:30:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:05.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:05.422 23:30:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:05.422 23:30:40 -- common/autotest_common.sh@10 -- # set +x 00:04:05.681 [2024-07-15 23:30:40.572979] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
00:04:05.681 [2024-07-15 23:30:40.573062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3658317 ] 00:04:05.681 EAL: No free 2048 kB hugepages reported on node 1 00:04:05.681 [2024-07-15 23:30:40.628505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.681 [2024-07-15 23:30:40.729591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.938 23:30:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:05.938 23:30:40 -- common/autotest_common.sh@862 -- # return 0 00:04:05.938 23:30:40 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:05.938 23:30:40 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:05.938 23:30:40 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:0b:00.0 00:04:09.290 nvme0n1 00:04:09.290 23:30:44 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:09.290 [2024-07-15 23:30:44.266449] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:09.290 [2024-07-15 23:30:44.266492] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:09.290 request: 00:04:09.290 { 00:04:09.290 "nvme_ctrlr_name": "nvme0", 00:04:09.290 "password": "test", 00:04:09.290 "method": "bdev_nvme_opal_revert", 00:04:09.290 "req_id": 1 00:04:09.290 } 00:04:09.290 Got JSON-RPC error response 00:04:09.290 response: 00:04:09.290 { 00:04:09.290 "code": -32603, 00:04:09.290 "message": "Internal error" 00:04:09.290 } 00:04:09.290 23:30:44 -- common/autotest_common.sh@1604 -- # true 00:04:09.290 23:30:44 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:04:09.290 23:30:44 -- common/autotest_common.sh@1608 -- # killprocess 3658317 00:04:09.290 23:30:44 -- common/autotest_common.sh@948 -- # '[' -z 3658317 ']' 00:04:09.290 23:30:44 -- common/autotest_common.sh@952 -- # kill -0 3658317 00:04:09.290 23:30:44 -- common/autotest_common.sh@953 -- # uname 00:04:09.291 23:30:44 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:09.291 23:30:44 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3658317 00:04:09.291 23:30:44 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:09.291 23:30:44 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:09.291 23:30:44 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3658317' 00:04:09.291 killing process with pid 3658317 00:04:09.291 23:30:44 -- common/autotest_common.sh@967 -- # kill 3658317 00:04:09.291 23:30:44 -- common/autotest_common.sh@972 -- # wait 3658317 00:04:11.188 23:30:46 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:11.188 23:30:46 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:11.188 23:30:46 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:11.188 23:30:46 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:11.188 23:30:46 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:11.188 23:30:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:11.188 23:30:46 -- common/autotest_common.sh@10 -- # set +x 00:04:11.188 23:30:46 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:11.188 23:30:46 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:11.188 23:30:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:11.188 23:30:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.188 23:30:46 -- common/autotest_common.sh@10 -- # set +x 00:04:11.188 ************************************ 00:04:11.188 START TEST env 00:04:11.188 ************************************ 00:04:11.188 23:30:46 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:11.188 * Looking for test storage... 00:04:11.188 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:11.188 23:30:46 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:11.188 23:30:46 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:11.188 23:30:46 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.188 23:30:46 env -- common/autotest_common.sh@10 -- # set +x 00:04:11.188 ************************************ 00:04:11.188 START TEST env_memory 00:04:11.188 ************************************ 00:04:11.188 23:30:46 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:11.188 00:04:11.188 00:04:11.188 CUnit - A unit testing framework for C - Version 2.1-3 00:04:11.188 http://cunit.sourceforge.net/ 00:04:11.188 00:04:11.188 00:04:11.188 Suite: memory 00:04:11.188 Test: alloc and free memory map ...[2024-07-15 23:30:46.148846] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:11.188 passed 00:04:11.188 Test: mem map translation ...[2024-07-15 23:30:46.169833] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:11.188 [2024-07-15 23:30:46.169854] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:11.188 [2024-07-15 23:30:46.169910] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:11.188 [2024-07-15 23:30:46.169921] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:11.188 passed 00:04:11.188 Test: mem map registration ...[2024-07-15 23:30:46.210715] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:11.188 [2024-07-15 23:30:46.210733] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:11.188 passed 00:04:11.188 Test: mem map adjacent registrations ...passed 00:04:11.188 00:04:11.188 Run Summary: Type Total Ran Passed Failed Inactive 00:04:11.188 suites 1 1 n/a 0 0 00:04:11.188 tests 4 4 4 0 0 00:04:11.188 asserts 152 152 152 0 n/a 00:04:11.188 00:04:11.188 Elapsed time = 0.143 seconds 00:04:11.188 00:04:11.188 real 0m0.151s 00:04:11.188 user 0m0.141s 00:04:11.188 sys 0m0.010s 00:04:11.188 23:30:46 
env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:11.188 23:30:46 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:11.188 ************************************ 00:04:11.188 END TEST env_memory 00:04:11.188 ************************************ 00:04:11.188 23:30:46 env -- common/autotest_common.sh@1142 -- # return 0 00:04:11.188 23:30:46 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:11.188 23:30:46 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:11.188 23:30:46 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.188 23:30:46 env -- common/autotest_common.sh@10 -- # set +x 00:04:11.448 ************************************ 00:04:11.448 START TEST env_vtophys 00:04:11.448 ************************************ 00:04:11.448 23:30:46 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:11.448 EAL: lib.eal log level changed from notice to debug 00:04:11.448 EAL: Detected lcore 0 as core 0 on socket 0 00:04:11.448 EAL: Detected lcore 1 as core 1 on socket 0 00:04:11.448 EAL: Detected lcore 2 as core 2 on socket 0 00:04:11.448 EAL: Detected lcore 3 as core 3 on socket 0 00:04:11.448 EAL: Detected lcore 4 as core 4 on socket 0 00:04:11.448 EAL: Detected lcore 5 as core 5 on socket 0 00:04:11.448 EAL: Detected lcore 6 as core 8 on socket 0 00:04:11.448 EAL: Detected lcore 7 as core 9 on socket 0 00:04:11.448 EAL: Detected lcore 8 as core 10 on socket 0 00:04:11.448 EAL: Detected lcore 9 as core 11 on socket 0 00:04:11.448 EAL: Detected lcore 10 as core 12 on socket 0 00:04:11.448 EAL: Detected lcore 11 as core 13 on socket 0 00:04:11.448 EAL: Detected lcore 12 as core 0 on socket 1 00:04:11.448 EAL: Detected lcore 13 as core 1 on socket 1 00:04:11.448 EAL: Detected lcore 14 as core 2 on socket 1 00:04:11.448 EAL: Detected lcore 15 as core 3 on socket 1 00:04:11.448 EAL: Detected lcore 16 as core 4 on socket 1 00:04:11.448 EAL: Detected lcore 17 as core 5 on socket 1 00:04:11.448 EAL: Detected lcore 18 as core 8 on socket 1 00:04:11.448 EAL: Detected lcore 19 as core 9 on socket 1 00:04:11.448 EAL: Detected lcore 20 as core 10 on socket 1 00:04:11.448 EAL: Detected lcore 21 as core 11 on socket 1 00:04:11.448 EAL: Detected lcore 22 as core 12 on socket 1 00:04:11.448 EAL: Detected lcore 23 as core 13 on socket 1 00:04:11.448 EAL: Detected lcore 24 as core 0 on socket 0 00:04:11.448 EAL: Detected lcore 25 as core 1 on socket 0 00:04:11.448 EAL: Detected lcore 26 as core 2 on socket 0 00:04:11.448 EAL: Detected lcore 27 as core 3 on socket 0 00:04:11.448 EAL: Detected lcore 28 as core 4 on socket 0 00:04:11.448 EAL: Detected lcore 29 as core 5 on socket 0 00:04:11.448 EAL: Detected lcore 30 as core 8 on socket 0 00:04:11.448 EAL: Detected lcore 31 as core 9 on socket 0 00:04:11.448 EAL: Detected lcore 32 as core 10 on socket 0 00:04:11.448 EAL: Detected lcore 33 as core 11 on socket 0 00:04:11.448 EAL: Detected lcore 34 as core 12 on socket 0 00:04:11.448 EAL: Detected lcore 35 as core 13 on socket 0 00:04:11.448 EAL: Detected lcore 36 as core 0 on socket 1 00:04:11.448 EAL: Detected lcore 37 as core 1 on socket 1 00:04:11.448 EAL: Detected lcore 38 as core 2 on socket 1 00:04:11.448 EAL: Detected lcore 39 as core 3 on socket 1 00:04:11.448 EAL: Detected lcore 40 as core 4 on socket 1 00:04:11.448 EAL: Detected lcore 41 as core 5 on socket 1 00:04:11.448 EAL: Detected 
lcore 42 as core 8 on socket 1 00:04:11.448 EAL: Detected lcore 43 as core 9 on socket 1 00:04:11.448 EAL: Detected lcore 44 as core 10 on socket 1 00:04:11.448 EAL: Detected lcore 45 as core 11 on socket 1 00:04:11.448 EAL: Detected lcore 46 as core 12 on socket 1 00:04:11.448 EAL: Detected lcore 47 as core 13 on socket 1 00:04:11.448 EAL: Maximum logical cores by configuration: 128 00:04:11.448 EAL: Detected CPU lcores: 48 00:04:11.448 EAL: Detected NUMA nodes: 2 00:04:11.448 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:11.448 EAL: Detected shared linkage of DPDK 00:04:11.448 EAL: No shared files mode enabled, IPC will be disabled 00:04:11.448 EAL: Bus pci wants IOVA as 'DC' 00:04:11.448 EAL: Buses did not request a specific IOVA mode. 00:04:11.448 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:11.448 EAL: Selected IOVA mode 'VA' 00:04:11.448 EAL: No free 2048 kB hugepages reported on node 1 00:04:11.448 EAL: Probing VFIO support... 00:04:11.448 EAL: IOMMU type 1 (Type 1) is supported 00:04:11.448 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:11.448 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:11.448 EAL: VFIO support initialized 00:04:11.448 EAL: Ask a virtual area of 0x2e000 bytes 00:04:11.448 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:11.448 EAL: Setting up physically contiguous memory... 00:04:11.448 EAL: Setting maximum number of open files to 524288 00:04:11.448 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:11.448 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:11.448 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:11.448 EAL: Ask a virtual area of 0x61000 bytes 00:04:11.448 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:11.448 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:11.448 EAL: Ask a virtual area of 0x400000000 bytes 00:04:11.448 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:11.448 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:11.448 EAL: Ask a virtual area of 0x61000 bytes 00:04:11.448 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:11.448 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:11.448 EAL: Ask a virtual area of 0x400000000 bytes 00:04:11.448 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:11.448 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:11.448 EAL: Ask a virtual area of 0x61000 bytes 00:04:11.448 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:11.448 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:11.448 EAL: Ask a virtual area of 0x400000000 bytes 00:04:11.448 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:11.448 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:11.448 EAL: Ask a virtual area of 0x61000 bytes 00:04:11.448 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:11.448 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:11.448 EAL: Ask a virtual area of 0x400000000 bytes 00:04:11.448 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:11.448 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:11.448 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:11.448 EAL: Ask a virtual area of 0x61000 bytes 00:04:11.448 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:11.448 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:04:11.448 EAL: Ask a virtual area of 0x400000000 bytes 00:04:11.448 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:11.448 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:11.448 EAL: Ask a virtual area of 0x61000 bytes 00:04:11.448 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:11.448 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:11.448 EAL: Ask a virtual area of 0x400000000 bytes 00:04:11.448 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:11.449 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:11.449 EAL: Ask a virtual area of 0x61000 bytes 00:04:11.449 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:11.449 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:11.449 EAL: Ask a virtual area of 0x400000000 bytes 00:04:11.449 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:11.449 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:11.449 EAL: Ask a virtual area of 0x61000 bytes 00:04:11.449 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:11.449 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:11.449 EAL: Ask a virtual area of 0x400000000 bytes 00:04:11.449 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:11.449 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:11.449 EAL: Hugepages will be freed exactly as allocated. 00:04:11.449 EAL: No shared files mode enabled, IPC is disabled 00:04:11.449 EAL: No shared files mode enabled, IPC is disabled 00:04:11.449 EAL: TSC frequency is ~2700000 KHz 00:04:11.449 EAL: Main lcore 0 is ready (tid=7ff902066a00;cpuset=[0]) 00:04:11.449 EAL: Trying to obtain current memory policy. 00:04:11.449 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.449 EAL: Restoring previous memory policy: 0 00:04:11.449 EAL: request: mp_malloc_sync 00:04:11.449 EAL: No shared files mode enabled, IPC is disabled 00:04:11.449 EAL: Heap on socket 0 was expanded by 2MB 00:04:11.449 EAL: No shared files mode enabled, IPC is disabled 00:04:11.449 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:11.449 EAL: Mem event callback 'spdk:(nil)' registered 00:04:11.449 00:04:11.449 00:04:11.449 CUnit - A unit testing framework for C - Version 2.1-3 00:04:11.449 http://cunit.sourceforge.net/ 00:04:11.449 00:04:11.449 00:04:11.449 Suite: components_suite 00:04:11.449 Test: vtophys_malloc_test ...passed 00:04:11.449 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:11.449 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.449 EAL: Restoring previous memory policy: 4 00:04:11.449 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.449 EAL: request: mp_malloc_sync 00:04:11.449 EAL: No shared files mode enabled, IPC is disabled 00:04:11.449 EAL: Heap on socket 0 was expanded by 4MB 00:04:11.449 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.449 EAL: request: mp_malloc_sync 00:04:11.449 EAL: No shared files mode enabled, IPC is disabled 00:04:11.449 EAL: Heap on socket 0 was shrunk by 4MB 00:04:11.449 EAL: Trying to obtain current memory policy. 
00:04:11.449 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.449 EAL: Restoring previous memory policy: 4 00:04:11.449 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.449 EAL: request: mp_malloc_sync 00:04:11.449 EAL: No shared files mode enabled, IPC is disabled 00:04:11.449 EAL: Heap on socket 0 was expanded by 6MB 00:04:11.449 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.449 EAL: request: mp_malloc_sync 00:04:11.449 EAL: No shared files mode enabled, IPC is disabled 00:04:11.449 EAL: Heap on socket 0 was shrunk by 6MB 00:04:11.449 EAL: Trying to obtain current memory policy. 00:04:11.449 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.449 EAL: Restoring previous memory policy: 4 00:04:11.449 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.449 EAL: request: mp_malloc_sync 00:04:11.449 EAL: No shared files mode enabled, IPC is disabled 00:04:11.449 EAL: Heap on socket 0 was expanded by 10MB 00:04:11.449 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.449 EAL: request: mp_malloc_sync 00:04:11.449 EAL: No shared files mode enabled, IPC is disabled 00:04:11.449 EAL: Heap on socket 0 was shrunk by 10MB 00:04:11.449 EAL: Trying to obtain current memory policy. 00:04:11.449 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.449 EAL: Restoring previous memory policy: 4 00:04:11.449 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.449 EAL: request: mp_malloc_sync 00:04:11.449 EAL: No shared files mode enabled, IPC is disabled 00:04:11.449 EAL: Heap on socket 0 was expanded by 18MB 00:04:11.449 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.449 EAL: request: mp_malloc_sync 00:04:11.449 EAL: No shared files mode enabled, IPC is disabled 00:04:11.449 EAL: Heap on socket 0 was shrunk by 18MB 00:04:11.449 EAL: Trying to obtain current memory policy. 00:04:11.449 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.449 EAL: Restoring previous memory policy: 4 00:04:11.449 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.449 EAL: request: mp_malloc_sync 00:04:11.449 EAL: No shared files mode enabled, IPC is disabled 00:04:11.449 EAL: Heap on socket 0 was expanded by 34MB 00:04:11.449 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.449 EAL: request: mp_malloc_sync 00:04:11.449 EAL: No shared files mode enabled, IPC is disabled 00:04:11.449 EAL: Heap on socket 0 was shrunk by 34MB 00:04:11.449 EAL: Trying to obtain current memory policy. 00:04:11.449 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.449 EAL: Restoring previous memory policy: 4 00:04:11.449 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.449 EAL: request: mp_malloc_sync 00:04:11.449 EAL: No shared files mode enabled, IPC is disabled 00:04:11.449 EAL: Heap on socket 0 was expanded by 66MB 00:04:11.449 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.449 EAL: request: mp_malloc_sync 00:04:11.449 EAL: No shared files mode enabled, IPC is disabled 00:04:11.449 EAL: Heap on socket 0 was shrunk by 66MB 00:04:11.449 EAL: Trying to obtain current memory policy. 
00:04:11.449 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.449 EAL: Restoring previous memory policy: 4 00:04:11.449 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.449 EAL: request: mp_malloc_sync 00:04:11.449 EAL: No shared files mode enabled, IPC is disabled 00:04:11.449 EAL: Heap on socket 0 was expanded by 130MB 00:04:11.449 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.449 EAL: request: mp_malloc_sync 00:04:11.449 EAL: No shared files mode enabled, IPC is disabled 00:04:11.449 EAL: Heap on socket 0 was shrunk by 130MB 00:04:11.449 EAL: Trying to obtain current memory policy. 00:04:11.449 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.708 EAL: Restoring previous memory policy: 4 00:04:11.708 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.708 EAL: request: mp_malloc_sync 00:04:11.708 EAL: No shared files mode enabled, IPC is disabled 00:04:11.708 EAL: Heap on socket 0 was expanded by 258MB 00:04:11.708 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.708 EAL: request: mp_malloc_sync 00:04:11.708 EAL: No shared files mode enabled, IPC is disabled 00:04:11.708 EAL: Heap on socket 0 was shrunk by 258MB 00:04:11.708 EAL: Trying to obtain current memory policy. 00:04:11.708 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.966 EAL: Restoring previous memory policy: 4 00:04:11.966 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.966 EAL: request: mp_malloc_sync 00:04:11.966 EAL: No shared files mode enabled, IPC is disabled 00:04:11.966 EAL: Heap on socket 0 was expanded by 514MB 00:04:11.966 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.966 EAL: request: mp_malloc_sync 00:04:11.966 EAL: No shared files mode enabled, IPC is disabled 00:04:11.966 EAL: Heap on socket 0 was shrunk by 514MB 00:04:11.966 EAL: Trying to obtain current memory policy. 
00:04:11.966 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.229 EAL: Restoring previous memory policy: 4 00:04:12.229 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.229 EAL: request: mp_malloc_sync 00:04:12.229 EAL: No shared files mode enabled, IPC is disabled 00:04:12.229 EAL: Heap on socket 0 was expanded by 1026MB 00:04:12.487 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.745 EAL: request: mp_malloc_sync 00:04:12.745 EAL: No shared files mode enabled, IPC is disabled 00:04:12.745 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:12.745 passed 00:04:12.745 00:04:12.745 Run Summary: Type Total Ran Passed Failed Inactive 00:04:12.745 suites 1 1 n/a 0 0 00:04:12.745 tests 2 2 2 0 0 00:04:12.745 asserts 497 497 497 0 n/a 00:04:12.745 00:04:12.745 Elapsed time = 1.300 seconds 00:04:12.745 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.745 EAL: request: mp_malloc_sync 00:04:12.745 EAL: No shared files mode enabled, IPC is disabled 00:04:12.745 EAL: Heap on socket 0 was shrunk by 2MB 00:04:12.745 EAL: No shared files mode enabled, IPC is disabled 00:04:12.745 EAL: No shared files mode enabled, IPC is disabled 00:04:12.745 EAL: No shared files mode enabled, IPC is disabled 00:04:12.745 00:04:12.745 real 0m1.409s 00:04:12.745 user 0m0.824s 00:04:12.745 sys 0m0.553s 00:04:12.745 23:30:47 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.745 23:30:47 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:12.745 ************************************ 00:04:12.745 END TEST env_vtophys 00:04:12.745 ************************************ 00:04:12.745 23:30:47 env -- common/autotest_common.sh@1142 -- # return 0 00:04:12.745 23:30:47 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:12.745 23:30:47 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:12.745 23:30:47 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.745 23:30:47 env -- common/autotest_common.sh@10 -- # set +x 00:04:12.745 ************************************ 00:04:12.745 START TEST env_pci 00:04:12.745 ************************************ 00:04:12.745 23:30:47 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:12.745 00:04:12.745 00:04:12.745 CUnit - A unit testing framework for C - Version 2.1-3 00:04:12.745 http://cunit.sourceforge.net/ 00:04:12.745 00:04:12.745 00:04:12.745 Suite: pci 00:04:12.745 Test: pci_hook ...[2024-07-15 23:30:47.778746] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3659200 has claimed it 00:04:12.745 EAL: Cannot find device (10000:00:01.0) 00:04:12.745 EAL: Failed to attach device on primary process 00:04:12.745 passed 00:04:12.745 00:04:12.745 Run Summary: Type Total Ran Passed Failed Inactive 00:04:12.745 suites 1 1 n/a 0 0 00:04:12.745 tests 1 1 1 0 0 00:04:12.745 asserts 25 25 25 0 n/a 00:04:12.745 00:04:12.745 Elapsed time = 0.021 seconds 00:04:12.745 00:04:12.745 real 0m0.034s 00:04:12.745 user 0m0.011s 00:04:12.745 sys 0m0.022s 00:04:12.745 23:30:47 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.745 23:30:47 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:12.745 ************************************ 00:04:12.745 END TEST env_pci 00:04:12.745 ************************************ 
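[Editor's annotation - not part of the captured log] The env_memory, env_vtophys and env_pci suites above each wrap a small standalone binary under test/env/ in the SPDK tree. A minimal sketch for re-running them by hand against the same build, assuming this job's workspace layout and root privileges for hugepage access:

  # re-run the CUnit env suites shown above, outside the autotest harness
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sudo ./test/env/memory/memory_ut      # mem map alloc/translation/registration tests
  sudo ./test/env/vtophys/vtophys       # EAL heap expand/shrink (vtophys) tests
  sudo ./test/env/pci/pci_ut            # PCI device claim/hook test

Each binary prints the same CUnit "Run Summary" table that appears in the log output above.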
00:04:12.745 23:30:47 env -- common/autotest_common.sh@1142 -- # return 0 00:04:12.745 23:30:47 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:12.745 23:30:47 env -- env/env.sh@15 -- # uname 00:04:12.745 23:30:47 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:12.745 23:30:47 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:12.745 23:30:47 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:12.745 23:30:47 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:12.745 23:30:47 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.745 23:30:47 env -- common/autotest_common.sh@10 -- # set +x 00:04:12.745 ************************************ 00:04:12.745 START TEST env_dpdk_post_init 00:04:12.745 ************************************ 00:04:12.745 23:30:47 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:13.003 EAL: Detected CPU lcores: 48 00:04:13.003 EAL: Detected NUMA nodes: 2 00:04:13.003 EAL: Detected shared linkage of DPDK 00:04:13.003 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:13.003 EAL: Selected IOVA mode 'VA' 00:04:13.003 EAL: No free 2048 kB hugepages reported on node 1 00:04:13.003 EAL: VFIO support initialized 00:04:13.003 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:13.003 EAL: Using IOMMU type 1 (Type 1) 00:04:13.003 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:04:13.003 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:04:13.003 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:04:13.003 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:04:13.003 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:04:13.003 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:04:13.003 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:04:13.004 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:04:13.938 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:0b:00.0 (socket 0) 00:04:13.938 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:04:13.938 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:04:13.938 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:04:13.938 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:04:13.938 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:04:13.939 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:04:13.939 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:04:13.939 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:04:17.219 EAL: Releasing PCI mapped resource for 0000:0b:00.0 00:04:17.219 EAL: Calling pci_unmap_resource for 0000:0b:00.0 at 0x202001020000 00:04:17.219 Starting DPDK initialization... 00:04:17.219 Starting SPDK post initialization... 00:04:17.219 SPDK NVMe probe 00:04:17.219 Attaching to 0000:0b:00.0 00:04:17.219 Attached to 0000:0b:00.0 00:04:17.219 Cleaning up... 
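[Editor's annotation - not part of the captured log] env_dpdk_post_init above is invoked with "-c 0x1 --base-virtaddr=0x200000000000": -c 0x1 is the EAL core mask (run on lcore 0 only), and --base-virtaddr pins DPDK's memory mappings at a fixed virtual base so primary and secondary processes can map the same regions at identical addresses. A sketch of the standalone invocation, assuming this job's workspace layout:

  # reproduce the ioat/nvme probe, attach and cleanup sequence shown above
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sudo ./test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000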
00:04:17.219 00:04:17.219 real 0m4.350s 00:04:17.219 user 0m3.219s 00:04:17.219 sys 0m0.187s 00:04:17.219 23:30:52 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.219 23:30:52 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:17.219 ************************************ 00:04:17.219 END TEST env_dpdk_post_init 00:04:17.219 ************************************ 00:04:17.219 23:30:52 env -- common/autotest_common.sh@1142 -- # return 0 00:04:17.219 23:30:52 env -- env/env.sh@26 -- # uname 00:04:17.219 23:30:52 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:17.220 23:30:52 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:17.220 23:30:52 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.220 23:30:52 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.220 23:30:52 env -- common/autotest_common.sh@10 -- # set +x 00:04:17.220 ************************************ 00:04:17.220 START TEST env_mem_callbacks 00:04:17.220 ************************************ 00:04:17.220 23:30:52 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:17.220 EAL: Detected CPU lcores: 48 00:04:17.220 EAL: Detected NUMA nodes: 2 00:04:17.220 EAL: Detected shared linkage of DPDK 00:04:17.220 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:17.220 EAL: Selected IOVA mode 'VA' 00:04:17.220 EAL: No free 2048 kB hugepages reported on node 1 00:04:17.220 EAL: VFIO support initialized 00:04:17.220 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:17.220 00:04:17.220 00:04:17.220 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.220 http://cunit.sourceforge.net/ 00:04:17.220 00:04:17.220 00:04:17.220 Suite: memory 00:04:17.220 Test: test ... 
00:04:17.220 register 0x200000200000 2097152 00:04:17.220 malloc 3145728 00:04:17.220 register 0x200000400000 4194304 00:04:17.220 buf 0x200000500000 len 3145728 PASSED 00:04:17.220 malloc 64 00:04:17.220 buf 0x2000004fff40 len 64 PASSED 00:04:17.220 malloc 4194304 00:04:17.220 register 0x200000800000 6291456 00:04:17.220 buf 0x200000a00000 len 4194304 PASSED 00:04:17.220 free 0x200000500000 3145728 00:04:17.220 free 0x2000004fff40 64 00:04:17.220 unregister 0x200000400000 4194304 PASSED 00:04:17.220 free 0x200000a00000 4194304 00:04:17.220 unregister 0x200000800000 6291456 PASSED 00:04:17.220 malloc 8388608 00:04:17.220 register 0x200000400000 10485760 00:04:17.220 buf 0x200000600000 len 8388608 PASSED 00:04:17.220 free 0x200000600000 8388608 00:04:17.220 unregister 0x200000400000 10485760 PASSED 00:04:17.220 passed 00:04:17.220 00:04:17.220 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.220 suites 1 1 n/a 0 0 00:04:17.220 tests 1 1 1 0 0 00:04:17.220 asserts 15 15 15 0 n/a 00:04:17.220 00:04:17.220 Elapsed time = 0.005 seconds 00:04:17.220 00:04:17.220 real 0m0.047s 00:04:17.220 user 0m0.017s 00:04:17.220 sys 0m0.030s 00:04:17.220 23:30:52 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.220 23:30:52 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:17.220 ************************************ 00:04:17.220 END TEST env_mem_callbacks 00:04:17.220 ************************************ 00:04:17.220 23:30:52 env -- common/autotest_common.sh@1142 -- # return 0 00:04:17.220 00:04:17.220 real 0m6.283s 00:04:17.220 user 0m4.342s 00:04:17.220 sys 0m0.984s 00:04:17.220 23:30:52 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.220 23:30:52 env -- common/autotest_common.sh@10 -- # set +x 00:04:17.220 ************************************ 00:04:17.220 END TEST env 00:04:17.220 ************************************ 00:04:17.220 23:30:52 -- common/autotest_common.sh@1142 -- # return 0 00:04:17.220 23:30:52 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:17.220 23:30:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.220 23:30:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.220 23:30:52 -- common/autotest_common.sh@10 -- # set +x 00:04:17.478 ************************************ 00:04:17.478 START TEST rpc 00:04:17.478 ************************************ 00:04:17.478 23:30:52 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:17.478 * Looking for test storage... 00:04:17.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:17.478 23:30:52 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3659858 00:04:17.478 23:30:52 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:17.478 23:30:52 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:17.478 23:30:52 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3659858 00:04:17.478 23:30:52 rpc -- common/autotest_common.sh@829 -- # '[' -z 3659858 ']' 00:04:17.478 23:30:52 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:17.478 23:30:52 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:17.478 23:30:52 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:17.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:17.478 23:30:52 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:17.478 23:30:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.478 [2024-07-15 23:30:52.466243] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:04:17.478 [2024-07-15 23:30:52.466346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3659858 ] 00:04:17.478 EAL: No free 2048 kB hugepages reported on node 1 00:04:17.478 [2024-07-15 23:30:52.524612] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:17.737 [2024-07-15 23:30:52.632336] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:17.737 [2024-07-15 23:30:52.632388] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3659858' to capture a snapshot of events at runtime. 00:04:17.737 [2024-07-15 23:30:52.632412] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:17.737 [2024-07-15 23:30:52.632422] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:17.737 [2024-07-15 23:30:52.632433] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3659858 for offline analysis/debug. 00:04:17.737 [2024-07-15 23:30:52.632460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.996 23:30:52 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:17.996 23:30:52 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:17.996 23:30:52 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:17.996 23:30:52 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:17.996 23:30:52 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:17.996 23:30:52 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:17.996 23:30:52 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.996 23:30:52 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.996 23:30:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.996 ************************************ 00:04:17.996 START TEST rpc_integrity 00:04:17.996 ************************************ 00:04:17.996 23:30:52 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:17.996 23:30:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:17.996 23:30:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:17.996 23:30:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.996 23:30:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:17.996 23:30:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:04:17.996 23:30:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:17.996 23:30:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:17.996 23:30:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:17.996 23:30:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:17.996 23:30:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.996 23:30:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:17.996 23:30:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:17.996 23:30:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:17.996 23:30:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:17.996 23:30:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.996 23:30:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:17.996 23:30:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:17.996 { 00:04:17.996 "name": "Malloc0", 00:04:17.996 "aliases": [ 00:04:17.996 "684cc0ad-19b6-4ca9-abd4-cb0bf5316ad5" 00:04:17.996 ], 00:04:17.996 "product_name": "Malloc disk", 00:04:17.996 "block_size": 512, 00:04:17.996 "num_blocks": 16384, 00:04:17.996 "uuid": "684cc0ad-19b6-4ca9-abd4-cb0bf5316ad5", 00:04:17.996 "assigned_rate_limits": { 00:04:17.996 "rw_ios_per_sec": 0, 00:04:17.996 "rw_mbytes_per_sec": 0, 00:04:17.996 "r_mbytes_per_sec": 0, 00:04:17.996 "w_mbytes_per_sec": 0 00:04:17.996 }, 00:04:17.996 "claimed": false, 00:04:17.996 "zoned": false, 00:04:17.996 "supported_io_types": { 00:04:17.996 "read": true, 00:04:17.996 "write": true, 00:04:17.996 "unmap": true, 00:04:17.996 "flush": true, 00:04:17.996 "reset": true, 00:04:17.996 "nvme_admin": false, 00:04:17.996 "nvme_io": false, 00:04:17.996 "nvme_io_md": false, 00:04:17.996 "write_zeroes": true, 00:04:17.996 "zcopy": true, 00:04:17.996 "get_zone_info": false, 00:04:17.996 "zone_management": false, 00:04:17.996 "zone_append": false, 00:04:17.996 "compare": false, 00:04:17.996 "compare_and_write": false, 00:04:17.996 "abort": true, 00:04:17.996 "seek_hole": false, 00:04:17.996 "seek_data": false, 00:04:17.996 "copy": true, 00:04:17.996 "nvme_iov_md": false 00:04:17.996 }, 00:04:17.996 "memory_domains": [ 00:04:17.996 { 00:04:17.996 "dma_device_id": "system", 00:04:17.996 "dma_device_type": 1 00:04:17.996 }, 00:04:17.996 { 00:04:17.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.996 "dma_device_type": 2 00:04:17.996 } 00:04:17.996 ], 00:04:17.996 "driver_specific": {} 00:04:17.996 } 00:04:17.996 ]' 00:04:17.996 23:30:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:17.996 23:30:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:17.996 23:30:52 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:17.996 23:30:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:17.996 23:30:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.996 [2024-07-15 23:30:52.995559] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:17.996 [2024-07-15 23:30:52.995600] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:17.996 [2024-07-15 23:30:52.995624] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x100ad50 00:04:17.996 [2024-07-15 23:30:52.995638] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:17.996 
[2024-07-15 23:30:52.997165] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:17.996 [2024-07-15 23:30:52.997192] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:17.996 Passthru0 00:04:17.996 23:30:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:17.996 23:30:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:17.996 23:30:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:17.996 23:30:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.996 23:30:53 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:17.996 23:30:53 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:17.996 { 00:04:17.996 "name": "Malloc0", 00:04:17.996 "aliases": [ 00:04:17.996 "684cc0ad-19b6-4ca9-abd4-cb0bf5316ad5" 00:04:17.996 ], 00:04:17.996 "product_name": "Malloc disk", 00:04:17.996 "block_size": 512, 00:04:17.996 "num_blocks": 16384, 00:04:17.996 "uuid": "684cc0ad-19b6-4ca9-abd4-cb0bf5316ad5", 00:04:17.996 "assigned_rate_limits": { 00:04:17.996 "rw_ios_per_sec": 0, 00:04:17.996 "rw_mbytes_per_sec": 0, 00:04:17.996 "r_mbytes_per_sec": 0, 00:04:17.996 "w_mbytes_per_sec": 0 00:04:17.996 }, 00:04:17.996 "claimed": true, 00:04:17.996 "claim_type": "exclusive_write", 00:04:17.996 "zoned": false, 00:04:17.996 "supported_io_types": { 00:04:17.996 "read": true, 00:04:17.996 "write": true, 00:04:17.996 "unmap": true, 00:04:17.996 "flush": true, 00:04:17.996 "reset": true, 00:04:17.996 "nvme_admin": false, 00:04:17.996 "nvme_io": false, 00:04:17.997 "nvme_io_md": false, 00:04:17.997 "write_zeroes": true, 00:04:17.997 "zcopy": true, 00:04:17.997 "get_zone_info": false, 00:04:17.997 "zone_management": false, 00:04:17.997 "zone_append": false, 00:04:17.997 "compare": false, 00:04:17.997 "compare_and_write": false, 00:04:17.997 "abort": true, 00:04:17.997 "seek_hole": false, 00:04:17.997 "seek_data": false, 00:04:17.997 "copy": true, 00:04:17.997 "nvme_iov_md": false 00:04:17.997 }, 00:04:17.997 "memory_domains": [ 00:04:17.997 { 00:04:17.997 "dma_device_id": "system", 00:04:17.997 "dma_device_type": 1 00:04:17.997 }, 00:04:17.997 { 00:04:17.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.997 "dma_device_type": 2 00:04:17.997 } 00:04:17.997 ], 00:04:17.997 "driver_specific": {} 00:04:17.997 }, 00:04:17.997 { 00:04:17.997 "name": "Passthru0", 00:04:17.997 "aliases": [ 00:04:17.997 "c169159d-800a-544c-9b49-6dec3ace8dfa" 00:04:17.997 ], 00:04:17.997 "product_name": "passthru", 00:04:17.997 "block_size": 512, 00:04:17.997 "num_blocks": 16384, 00:04:17.997 "uuid": "c169159d-800a-544c-9b49-6dec3ace8dfa", 00:04:17.997 "assigned_rate_limits": { 00:04:17.997 "rw_ios_per_sec": 0, 00:04:17.997 "rw_mbytes_per_sec": 0, 00:04:17.997 "r_mbytes_per_sec": 0, 00:04:17.997 "w_mbytes_per_sec": 0 00:04:17.997 }, 00:04:17.997 "claimed": false, 00:04:17.997 "zoned": false, 00:04:17.997 "supported_io_types": { 00:04:17.997 "read": true, 00:04:17.997 "write": true, 00:04:17.997 "unmap": true, 00:04:17.997 "flush": true, 00:04:17.997 "reset": true, 00:04:17.997 "nvme_admin": false, 00:04:17.997 "nvme_io": false, 00:04:17.997 "nvme_io_md": false, 00:04:17.997 "write_zeroes": true, 00:04:17.997 "zcopy": true, 00:04:17.997 "get_zone_info": false, 00:04:17.997 "zone_management": false, 00:04:17.997 "zone_append": false, 00:04:17.997 "compare": false, 00:04:17.997 "compare_and_write": false, 00:04:17.997 "abort": true, 00:04:17.997 "seek_hole": false, 
00:04:17.997 "seek_data": false, 00:04:17.997 "copy": true, 00:04:17.997 "nvme_iov_md": false 00:04:17.997 }, 00:04:17.997 "memory_domains": [ 00:04:17.997 { 00:04:17.997 "dma_device_id": "system", 00:04:17.997 "dma_device_type": 1 00:04:17.997 }, 00:04:17.997 { 00:04:17.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.997 "dma_device_type": 2 00:04:17.997 } 00:04:17.997 ], 00:04:17.997 "driver_specific": { 00:04:17.997 "passthru": { 00:04:17.997 "name": "Passthru0", 00:04:17.997 "base_bdev_name": "Malloc0" 00:04:17.997 } 00:04:17.997 } 00:04:17.997 } 00:04:17.997 ]' 00:04:17.997 23:30:53 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:17.997 23:30:53 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:17.997 23:30:53 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:17.997 23:30:53 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:17.997 23:30:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.997 23:30:53 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:17.997 23:30:53 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:17.997 23:30:53 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:17.997 23:30:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.997 23:30:53 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:17.997 23:30:53 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:17.997 23:30:53 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:17.997 23:30:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.997 23:30:53 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:17.997 23:30:53 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:17.997 23:30:53 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:17.997 23:30:53 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:17.997 00:04:17.997 real 0m0.213s 00:04:17.997 user 0m0.143s 00:04:17.997 sys 0m0.014s 00:04:17.997 23:30:53 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.997 23:30:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.997 ************************************ 00:04:17.997 END TEST rpc_integrity 00:04:17.997 ************************************ 00:04:18.271 23:30:53 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:18.271 23:30:53 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:18.271 23:30:53 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:18.271 23:30:53 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.271 23:30:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.271 ************************************ 00:04:18.271 START TEST rpc_plugins 00:04:18.271 ************************************ 00:04:18.271 23:30:53 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:18.271 23:30:53 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:18.271 23:30:53 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:18.271 23:30:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:18.271 23:30:53 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:18.271 23:30:53 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:18.271 23:30:53 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:04:18.271 23:30:53 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:18.271 23:30:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:18.271 23:30:53 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:18.271 23:30:53 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:18.271 { 00:04:18.271 "name": "Malloc1", 00:04:18.271 "aliases": [ 00:04:18.271 "d1100656-b810-4bf6-a001-2eeb1d9a4c9a" 00:04:18.271 ], 00:04:18.271 "product_name": "Malloc disk", 00:04:18.271 "block_size": 4096, 00:04:18.271 "num_blocks": 256, 00:04:18.271 "uuid": "d1100656-b810-4bf6-a001-2eeb1d9a4c9a", 00:04:18.271 "assigned_rate_limits": { 00:04:18.271 "rw_ios_per_sec": 0, 00:04:18.271 "rw_mbytes_per_sec": 0, 00:04:18.271 "r_mbytes_per_sec": 0, 00:04:18.271 "w_mbytes_per_sec": 0 00:04:18.271 }, 00:04:18.271 "claimed": false, 00:04:18.271 "zoned": false, 00:04:18.271 "supported_io_types": { 00:04:18.271 "read": true, 00:04:18.271 "write": true, 00:04:18.271 "unmap": true, 00:04:18.271 "flush": true, 00:04:18.271 "reset": true, 00:04:18.271 "nvme_admin": false, 00:04:18.271 "nvme_io": false, 00:04:18.271 "nvme_io_md": false, 00:04:18.271 "write_zeroes": true, 00:04:18.271 "zcopy": true, 00:04:18.271 "get_zone_info": false, 00:04:18.271 "zone_management": false, 00:04:18.271 "zone_append": false, 00:04:18.271 "compare": false, 00:04:18.271 "compare_and_write": false, 00:04:18.271 "abort": true, 00:04:18.271 "seek_hole": false, 00:04:18.271 "seek_data": false, 00:04:18.271 "copy": true, 00:04:18.271 "nvme_iov_md": false 00:04:18.271 }, 00:04:18.271 "memory_domains": [ 00:04:18.271 { 00:04:18.271 "dma_device_id": "system", 00:04:18.271 "dma_device_type": 1 00:04:18.271 }, 00:04:18.271 { 00:04:18.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:18.271 "dma_device_type": 2 00:04:18.271 } 00:04:18.271 ], 00:04:18.271 "driver_specific": {} 00:04:18.271 } 00:04:18.271 ]' 00:04:18.271 23:30:53 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:18.271 23:30:53 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:18.271 23:30:53 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:18.271 23:30:53 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:18.271 23:30:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:18.271 23:30:53 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:18.271 23:30:53 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:18.271 23:30:53 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:18.271 23:30:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:18.271 23:30:53 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:18.271 23:30:53 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:18.271 23:30:53 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:18.271 23:30:53 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:18.271 00:04:18.271 real 0m0.107s 00:04:18.271 user 0m0.069s 00:04:18.271 sys 0m0.008s 00:04:18.271 23:30:53 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:18.271 23:30:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:18.271 ************************************ 00:04:18.271 END TEST rpc_plugins 00:04:18.271 ************************************ 00:04:18.271 23:30:53 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:18.271 23:30:53 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:18.271 23:30:53 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:18.271 23:30:53 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.271 23:30:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.271 ************************************ 00:04:18.271 START TEST rpc_trace_cmd_test 00:04:18.271 ************************************ 00:04:18.271 23:30:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:18.271 23:30:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:18.271 23:30:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:18.271 23:30:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:18.271 23:30:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:18.271 23:30:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:18.271 23:30:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:18.271 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3659858", 00:04:18.271 "tpoint_group_mask": "0x8", 00:04:18.271 "iscsi_conn": { 00:04:18.271 "mask": "0x2", 00:04:18.271 "tpoint_mask": "0x0" 00:04:18.271 }, 00:04:18.271 "scsi": { 00:04:18.271 "mask": "0x4", 00:04:18.271 "tpoint_mask": "0x0" 00:04:18.271 }, 00:04:18.271 "bdev": { 00:04:18.271 "mask": "0x8", 00:04:18.271 "tpoint_mask": "0xffffffffffffffff" 00:04:18.271 }, 00:04:18.271 "nvmf_rdma": { 00:04:18.271 "mask": "0x10", 00:04:18.271 "tpoint_mask": "0x0" 00:04:18.271 }, 00:04:18.271 "nvmf_tcp": { 00:04:18.271 "mask": "0x20", 00:04:18.271 "tpoint_mask": "0x0" 00:04:18.271 }, 00:04:18.271 "ftl": { 00:04:18.271 "mask": "0x40", 00:04:18.271 "tpoint_mask": "0x0" 00:04:18.271 }, 00:04:18.271 "blobfs": { 00:04:18.271 "mask": "0x80", 00:04:18.271 "tpoint_mask": "0x0" 00:04:18.271 }, 00:04:18.271 "dsa": { 00:04:18.271 "mask": "0x200", 00:04:18.271 "tpoint_mask": "0x0" 00:04:18.271 }, 00:04:18.271 "thread": { 00:04:18.271 "mask": "0x400", 00:04:18.271 "tpoint_mask": "0x0" 00:04:18.271 }, 00:04:18.271 "nvme_pcie": { 00:04:18.271 "mask": "0x800", 00:04:18.271 "tpoint_mask": "0x0" 00:04:18.271 }, 00:04:18.271 "iaa": { 00:04:18.271 "mask": "0x1000", 00:04:18.271 "tpoint_mask": "0x0" 00:04:18.271 }, 00:04:18.271 "nvme_tcp": { 00:04:18.271 "mask": "0x2000", 00:04:18.271 "tpoint_mask": "0x0" 00:04:18.271 }, 00:04:18.271 "bdev_nvme": { 00:04:18.271 "mask": "0x4000", 00:04:18.271 "tpoint_mask": "0x0" 00:04:18.271 }, 00:04:18.271 "sock": { 00:04:18.271 "mask": "0x8000", 00:04:18.271 "tpoint_mask": "0x0" 00:04:18.271 } 00:04:18.271 }' 00:04:18.271 23:30:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:18.271 23:30:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:18.271 23:30:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:18.271 23:30:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:18.271 23:30:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:18.539 23:30:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:18.539 23:30:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:18.539 23:30:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:18.539 23:30:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:18.539 23:30:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
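Note: the tpoint_group_mask of 0x8 reported by trace_get_info above matches the '-e bdev' flag this spdk_tgt was launched with, and tpoint_shm_path names the shared-memory file a trace tool can read. A minimal sketch of checking this by hand, assuming a local SPDK checkout and the default /var/tmp/spdk.sock (paths and pid are illustrative):

  build/bin/spdk_tgt -e bdev &                 # enable the bdev tracepoint group (mask 0x8) at startup
  scripts/rpc.py trace_get_info                # query the shm path and per-group tpoint masks over RPC
  build/bin/spdk_trace -s spdk_tgt -p <pid>    # snapshot events from /dev/shm/spdk_tgt_trace.pid<pid>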
00:04:18.539 00:04:18.539 real 0m0.179s 00:04:18.539 user 0m0.160s 00:04:18.539 sys 0m0.012s 00:04:18.539 23:30:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:18.539 23:30:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:18.539 ************************************ 00:04:18.539 END TEST rpc_trace_cmd_test 00:04:18.539 ************************************ 00:04:18.539 23:30:53 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:18.539 23:30:53 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:18.539 23:30:53 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:18.539 23:30:53 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:18.539 23:30:53 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:18.539 23:30:53 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.539 23:30:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.539 ************************************ 00:04:18.539 START TEST rpc_daemon_integrity 00:04:18.539 ************************************ 00:04:18.539 23:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:18.539 23:30:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:18.539 23:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:18.539 23:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.539 23:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:18.539 23:30:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:18.539 23:30:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:18.539 23:30:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:18.539 23:30:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:18.539 23:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:18.539 23:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.539 23:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:18.539 23:30:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:18.539 23:30:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:18.539 23:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:18.539 23:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.539 23:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:18.539 23:30:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:18.539 { 00:04:18.539 "name": "Malloc2", 00:04:18.539 "aliases": [ 00:04:18.539 "30a3b9de-de26-4a59-8c3e-795fc27ce162" 00:04:18.539 ], 00:04:18.539 "product_name": "Malloc disk", 00:04:18.539 "block_size": 512, 00:04:18.539 "num_blocks": 16384, 00:04:18.539 "uuid": "30a3b9de-de26-4a59-8c3e-795fc27ce162", 00:04:18.539 "assigned_rate_limits": { 00:04:18.539 "rw_ios_per_sec": 0, 00:04:18.539 "rw_mbytes_per_sec": 0, 00:04:18.539 "r_mbytes_per_sec": 0, 00:04:18.539 "w_mbytes_per_sec": 0 00:04:18.539 }, 00:04:18.539 "claimed": false, 00:04:18.539 "zoned": false, 00:04:18.539 "supported_io_types": { 00:04:18.539 "read": true, 00:04:18.539 "write": true, 00:04:18.539 "unmap": true, 00:04:18.539 "flush": true, 00:04:18.539 "reset": true, 00:04:18.539 "nvme_admin": false, 00:04:18.539 "nvme_io": false, 
00:04:18.539 "nvme_io_md": false, 00:04:18.539 "write_zeroes": true, 00:04:18.539 "zcopy": true, 00:04:18.539 "get_zone_info": false, 00:04:18.539 "zone_management": false, 00:04:18.539 "zone_append": false, 00:04:18.539 "compare": false, 00:04:18.540 "compare_and_write": false, 00:04:18.540 "abort": true, 00:04:18.540 "seek_hole": false, 00:04:18.540 "seek_data": false, 00:04:18.540 "copy": true, 00:04:18.540 "nvme_iov_md": false 00:04:18.540 }, 00:04:18.540 "memory_domains": [ 00:04:18.540 { 00:04:18.540 "dma_device_id": "system", 00:04:18.540 "dma_device_type": 1 00:04:18.540 }, 00:04:18.540 { 00:04:18.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:18.540 "dma_device_type": 2 00:04:18.540 } 00:04:18.540 ], 00:04:18.540 "driver_specific": {} 00:04:18.540 } 00:04:18.540 ]' 00:04:18.540 23:30:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:18.540 23:30:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:18.540 23:30:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:18.540 23:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:18.540 23:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.540 [2024-07-15 23:30:53.633406] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:18.540 [2024-07-15 23:30:53.633444] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:18.540 [2024-07-15 23:30:53.633466] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x100a980 00:04:18.540 [2024-07-15 23:30:53.633480] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:18.540 [2024-07-15 23:30:53.634596] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:18.540 [2024-07-15 23:30:53.634620] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:18.540 Passthru0 00:04:18.540 23:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:18.540 23:30:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:18.540 23:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:18.540 23:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.540 23:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:18.540 23:30:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:18.540 { 00:04:18.540 "name": "Malloc2", 00:04:18.540 "aliases": [ 00:04:18.540 "30a3b9de-de26-4a59-8c3e-795fc27ce162" 00:04:18.540 ], 00:04:18.540 "product_name": "Malloc disk", 00:04:18.540 "block_size": 512, 00:04:18.540 "num_blocks": 16384, 00:04:18.540 "uuid": "30a3b9de-de26-4a59-8c3e-795fc27ce162", 00:04:18.540 "assigned_rate_limits": { 00:04:18.540 "rw_ios_per_sec": 0, 00:04:18.540 "rw_mbytes_per_sec": 0, 00:04:18.540 "r_mbytes_per_sec": 0, 00:04:18.540 "w_mbytes_per_sec": 0 00:04:18.540 }, 00:04:18.540 "claimed": true, 00:04:18.540 "claim_type": "exclusive_write", 00:04:18.540 "zoned": false, 00:04:18.540 "supported_io_types": { 00:04:18.540 "read": true, 00:04:18.540 "write": true, 00:04:18.540 "unmap": true, 00:04:18.540 "flush": true, 00:04:18.540 "reset": true, 00:04:18.540 "nvme_admin": false, 00:04:18.540 "nvme_io": false, 00:04:18.540 "nvme_io_md": false, 00:04:18.540 "write_zeroes": true, 00:04:18.540 "zcopy": true, 00:04:18.540 "get_zone_info": 
false, 00:04:18.540 "zone_management": false, 00:04:18.540 "zone_append": false, 00:04:18.540 "compare": false, 00:04:18.540 "compare_and_write": false, 00:04:18.540 "abort": true, 00:04:18.540 "seek_hole": false, 00:04:18.540 "seek_data": false, 00:04:18.540 "copy": true, 00:04:18.540 "nvme_iov_md": false 00:04:18.540 }, 00:04:18.540 "memory_domains": [ 00:04:18.540 { 00:04:18.540 "dma_device_id": "system", 00:04:18.540 "dma_device_type": 1 00:04:18.540 }, 00:04:18.540 { 00:04:18.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:18.540 "dma_device_type": 2 00:04:18.540 } 00:04:18.540 ], 00:04:18.540 "driver_specific": {} 00:04:18.540 }, 00:04:18.540 { 00:04:18.540 "name": "Passthru0", 00:04:18.540 "aliases": [ 00:04:18.540 "eba8397f-0fc6-525c-8c9c-f111f13f4813" 00:04:18.540 ], 00:04:18.540 "product_name": "passthru", 00:04:18.540 "block_size": 512, 00:04:18.540 "num_blocks": 16384, 00:04:18.540 "uuid": "eba8397f-0fc6-525c-8c9c-f111f13f4813", 00:04:18.540 "assigned_rate_limits": { 00:04:18.540 "rw_ios_per_sec": 0, 00:04:18.540 "rw_mbytes_per_sec": 0, 00:04:18.540 "r_mbytes_per_sec": 0, 00:04:18.540 "w_mbytes_per_sec": 0 00:04:18.540 }, 00:04:18.540 "claimed": false, 00:04:18.540 "zoned": false, 00:04:18.540 "supported_io_types": { 00:04:18.540 "read": true, 00:04:18.540 "write": true, 00:04:18.540 "unmap": true, 00:04:18.540 "flush": true, 00:04:18.540 "reset": true, 00:04:18.540 "nvme_admin": false, 00:04:18.540 "nvme_io": false, 00:04:18.540 "nvme_io_md": false, 00:04:18.540 "write_zeroes": true, 00:04:18.540 "zcopy": true, 00:04:18.540 "get_zone_info": false, 00:04:18.540 "zone_management": false, 00:04:18.540 "zone_append": false, 00:04:18.540 "compare": false, 00:04:18.540 "compare_and_write": false, 00:04:18.540 "abort": true, 00:04:18.540 "seek_hole": false, 00:04:18.540 "seek_data": false, 00:04:18.540 "copy": true, 00:04:18.540 "nvme_iov_md": false 00:04:18.540 }, 00:04:18.540 "memory_domains": [ 00:04:18.540 { 00:04:18.540 "dma_device_id": "system", 00:04:18.540 "dma_device_type": 1 00:04:18.540 }, 00:04:18.540 { 00:04:18.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:18.540 "dma_device_type": 2 00:04:18.540 } 00:04:18.540 ], 00:04:18.540 "driver_specific": { 00:04:18.540 "passthru": { 00:04:18.540 "name": "Passthru0", 00:04:18.540 "base_bdev_name": "Malloc2" 00:04:18.540 } 00:04:18.540 } 00:04:18.540 } 00:04:18.540 ]' 00:04:18.540 23:30:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:18.796 23:30:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:18.796 23:30:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:18.796 23:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:18.796 23:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.796 23:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:18.796 23:30:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:18.796 23:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:18.796 23:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.796 23:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:18.796 23:30:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:18.796 23:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:18.796 23:30:53 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.796 23:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:18.796 23:30:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:18.796 23:30:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:18.796 23:30:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:18.796 00:04:18.796 real 0m0.212s 00:04:18.796 user 0m0.135s 00:04:18.796 sys 0m0.021s 00:04:18.796 23:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:18.796 23:30:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.796 ************************************ 00:04:18.796 END TEST rpc_daemon_integrity 00:04:18.796 ************************************ 00:04:18.796 23:30:53 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:18.796 23:30:53 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:18.796 23:30:53 rpc -- rpc/rpc.sh@84 -- # killprocess 3659858 00:04:18.796 23:30:53 rpc -- common/autotest_common.sh@948 -- # '[' -z 3659858 ']' 00:04:18.796 23:30:53 rpc -- common/autotest_common.sh@952 -- # kill -0 3659858 00:04:18.796 23:30:53 rpc -- common/autotest_common.sh@953 -- # uname 00:04:18.796 23:30:53 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:18.796 23:30:53 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3659858 00:04:18.796 23:30:53 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:18.796 23:30:53 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:18.796 23:30:53 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3659858' 00:04:18.796 killing process with pid 3659858 00:04:18.796 23:30:53 rpc -- common/autotest_common.sh@967 -- # kill 3659858 00:04:18.796 23:30:53 rpc -- common/autotest_common.sh@972 -- # wait 3659858 00:04:19.360 00:04:19.360 real 0m1.840s 00:04:19.360 user 0m2.310s 00:04:19.360 sys 0m0.551s 00:04:19.360 23:30:54 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:19.360 23:30:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.360 ************************************ 00:04:19.360 END TEST rpc 00:04:19.360 ************************************ 00:04:19.360 23:30:54 -- common/autotest_common.sh@1142 -- # return 0 00:04:19.360 23:30:54 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:19.360 23:30:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:19.360 23:30:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.360 23:30:54 -- common/autotest_common.sh@10 -- # set +x 00:04:19.360 ************************************ 00:04:19.360 START TEST skip_rpc 00:04:19.360 ************************************ 00:04:19.360 23:30:54 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:19.360 * Looking for test storage... 
00:04:19.360 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:19.360 23:30:54 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:19.360 23:30:54 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:19.360 23:30:54 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:19.360 23:30:54 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:19.360 23:30:54 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.360 23:30:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.360 ************************************ 00:04:19.360 START TEST skip_rpc 00:04:19.360 ************************************ 00:04:19.360 23:30:54 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:19.360 23:30:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3660295 00:04:19.360 23:30:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:19.360 23:30:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:19.360 23:30:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:19.360 [2024-07-15 23:30:54.386322] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:04:19.360 [2024-07-15 23:30:54.386396] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3660295 ] 00:04:19.360 EAL: No free 2048 kB hugepages reported on node 1 00:04:19.360 [2024-07-15 23:30:54.441500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.617 [2024-07-15 23:30:54.543109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.872 23:30:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:24.872 23:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:24.872 23:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:24.872 23:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:24.872 23:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:24.872 23:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:24.872 23:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:24.872 23:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:24.872 23:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:24.872 23:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.872 23:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:24.872 23:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:24.872 23:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:24.872 23:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:24.872 23:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:24.872 23:30:59 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:24.872 23:30:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3660295 00:04:24.872 23:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 3660295 ']' 00:04:24.872 23:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 3660295 00:04:24.872 23:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:24.872 23:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:24.872 23:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3660295 00:04:24.872 23:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:24.872 23:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:24.872 23:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3660295' 00:04:24.872 killing process with pid 3660295 00:04:24.872 23:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 3660295 00:04:24.872 23:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 3660295 00:04:24.872 00:04:24.872 real 0m5.463s 00:04:24.872 user 0m5.189s 00:04:24.872 sys 0m0.286s 00:04:24.872 23:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.872 23:30:59 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.872 ************************************ 00:04:24.872 END TEST skip_rpc 00:04:24.872 ************************************ 00:04:24.872 23:30:59 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:24.872 23:30:59 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:24.872 23:30:59 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:24.872 23:30:59 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.872 23:30:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.872 ************************************ 00:04:24.872 START TEST skip_rpc_with_json 00:04:24.872 ************************************ 00:04:24.872 23:30:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:24.872 23:30:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:24.872 23:30:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3660982 00:04:24.872 23:30:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:24.872 23:30:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:24.872 23:30:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3660982 00:04:24.872 23:30:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 3660982 ']' 00:04:24.872 23:30:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.872 23:30:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:24.872 23:30:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
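Note: the 'Waiting for process...' line comes from waitforlisten, which blocks until the freshly forked spdk_tgt (pid 3660982 here) is answering on its RPC socket. A sketch of the idea, not the harness's exact code; rpc_get_methods is just a cheap call that succeeds once the server is up:

  # poll the default socket until the target answers, giving up if the process dies first
  while kill -0 "$pid" 2>/dev/null; do
      scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done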
00:04:24.872 23:30:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:24.872 23:30:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:24.872 [2024-07-15 23:30:59.900738] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:04:24.872 [2024-07-15 23:30:59.900840] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3660982 ] 00:04:24.872 EAL: No free 2048 kB hugepages reported on node 1 00:04:24.872 [2024-07-15 23:30:59.957649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.130 [2024-07-15 23:31:00.066727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.388 23:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:25.388 23:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:25.388 23:31:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:25.388 23:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.388 23:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:25.388 [2024-07-15 23:31:00.315522] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:25.388 request: 00:04:25.388 { 00:04:25.388 "trtype": "tcp", 00:04:25.388 "method": "nvmf_get_transports", 00:04:25.388 "req_id": 1 00:04:25.388 } 00:04:25.388 Got JSON-RPC error response 00:04:25.388 response: 00:04:25.388 { 00:04:25.388 "code": -19, 00:04:25.388 "message": "No such device" 00:04:25.388 } 00:04:25.388 23:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:25.389 23:31:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:25.389 23:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.389 23:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:25.389 [2024-07-15 23:31:00.323610] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:25.389 23:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.389 23:31:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:25.389 23:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.389 23:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:25.389 23:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.389 23:31:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:25.389 { 00:04:25.389 "subsystems": [ 00:04:25.389 { 00:04:25.389 "subsystem": "vfio_user_target", 00:04:25.389 "config": null 00:04:25.389 }, 00:04:25.389 { 00:04:25.389 "subsystem": "keyring", 00:04:25.389 "config": [] 00:04:25.389 }, 00:04:25.389 { 00:04:25.389 "subsystem": "iobuf", 00:04:25.389 "config": [ 00:04:25.389 { 00:04:25.389 "method": "iobuf_set_options", 00:04:25.389 "params": { 00:04:25.389 "small_pool_count": 8192, 00:04:25.389 "large_pool_count": 1024, 00:04:25.389 "small_bufsize": 8192, 00:04:25.389 "large_bufsize": 
135168 00:04:25.389 } 00:04:25.389 } 00:04:25.389 ] 00:04:25.389 }, 00:04:25.389 { 00:04:25.389 "subsystem": "sock", 00:04:25.389 "config": [ 00:04:25.389 { 00:04:25.389 "method": "sock_set_default_impl", 00:04:25.389 "params": { 00:04:25.389 "impl_name": "posix" 00:04:25.389 } 00:04:25.389 }, 00:04:25.389 { 00:04:25.389 "method": "sock_impl_set_options", 00:04:25.389 "params": { 00:04:25.389 "impl_name": "ssl", 00:04:25.389 "recv_buf_size": 4096, 00:04:25.389 "send_buf_size": 4096, 00:04:25.389 "enable_recv_pipe": true, 00:04:25.389 "enable_quickack": false, 00:04:25.389 "enable_placement_id": 0, 00:04:25.389 "enable_zerocopy_send_server": true, 00:04:25.389 "enable_zerocopy_send_client": false, 00:04:25.389 "zerocopy_threshold": 0, 00:04:25.389 "tls_version": 0, 00:04:25.389 "enable_ktls": false 00:04:25.389 } 00:04:25.389 }, 00:04:25.389 { 00:04:25.389 "method": "sock_impl_set_options", 00:04:25.389 "params": { 00:04:25.389 "impl_name": "posix", 00:04:25.389 "recv_buf_size": 2097152, 00:04:25.389 "send_buf_size": 2097152, 00:04:25.389 "enable_recv_pipe": true, 00:04:25.389 "enable_quickack": false, 00:04:25.389 "enable_placement_id": 0, 00:04:25.389 "enable_zerocopy_send_server": true, 00:04:25.389 "enable_zerocopy_send_client": false, 00:04:25.389 "zerocopy_threshold": 0, 00:04:25.389 "tls_version": 0, 00:04:25.389 "enable_ktls": false 00:04:25.389 } 00:04:25.389 } 00:04:25.389 ] 00:04:25.389 }, 00:04:25.389 { 00:04:25.389 "subsystem": "vmd", 00:04:25.389 "config": [] 00:04:25.389 }, 00:04:25.389 { 00:04:25.389 "subsystem": "accel", 00:04:25.389 "config": [ 00:04:25.389 { 00:04:25.389 "method": "accel_set_options", 00:04:25.389 "params": { 00:04:25.389 "small_cache_size": 128, 00:04:25.389 "large_cache_size": 16, 00:04:25.389 "task_count": 2048, 00:04:25.389 "sequence_count": 2048, 00:04:25.389 "buf_count": 2048 00:04:25.389 } 00:04:25.389 } 00:04:25.389 ] 00:04:25.389 }, 00:04:25.389 { 00:04:25.389 "subsystem": "bdev", 00:04:25.389 "config": [ 00:04:25.389 { 00:04:25.389 "method": "bdev_set_options", 00:04:25.389 "params": { 00:04:25.389 "bdev_io_pool_size": 65535, 00:04:25.389 "bdev_io_cache_size": 256, 00:04:25.389 "bdev_auto_examine": true, 00:04:25.389 "iobuf_small_cache_size": 128, 00:04:25.389 "iobuf_large_cache_size": 16 00:04:25.389 } 00:04:25.389 }, 00:04:25.389 { 00:04:25.389 "method": "bdev_raid_set_options", 00:04:25.389 "params": { 00:04:25.389 "process_window_size_kb": 1024 00:04:25.389 } 00:04:25.389 }, 00:04:25.389 { 00:04:25.389 "method": "bdev_iscsi_set_options", 00:04:25.389 "params": { 00:04:25.389 "timeout_sec": 30 00:04:25.389 } 00:04:25.389 }, 00:04:25.389 { 00:04:25.389 "method": "bdev_nvme_set_options", 00:04:25.389 "params": { 00:04:25.389 "action_on_timeout": "none", 00:04:25.389 "timeout_us": 0, 00:04:25.389 "timeout_admin_us": 0, 00:04:25.389 "keep_alive_timeout_ms": 10000, 00:04:25.389 "arbitration_burst": 0, 00:04:25.389 "low_priority_weight": 0, 00:04:25.389 "medium_priority_weight": 0, 00:04:25.389 "high_priority_weight": 0, 00:04:25.389 "nvme_adminq_poll_period_us": 10000, 00:04:25.389 "nvme_ioq_poll_period_us": 0, 00:04:25.389 "io_queue_requests": 0, 00:04:25.389 "delay_cmd_submit": true, 00:04:25.389 "transport_retry_count": 4, 00:04:25.389 "bdev_retry_count": 3, 00:04:25.389 "transport_ack_timeout": 0, 00:04:25.389 "ctrlr_loss_timeout_sec": 0, 00:04:25.389 "reconnect_delay_sec": 0, 00:04:25.389 "fast_io_fail_timeout_sec": 0, 00:04:25.389 "disable_auto_failback": false, 00:04:25.389 "generate_uuids": false, 00:04:25.389 "transport_tos": 0, 
00:04:25.389 "nvme_error_stat": false, 00:04:25.389 "rdma_srq_size": 0, 00:04:25.389 "io_path_stat": false, 00:04:25.389 "allow_accel_sequence": false, 00:04:25.389 "rdma_max_cq_size": 0, 00:04:25.389 "rdma_cm_event_timeout_ms": 0, 00:04:25.389 "dhchap_digests": [ 00:04:25.389 "sha256", 00:04:25.389 "sha384", 00:04:25.389 "sha512" 00:04:25.389 ], 00:04:25.389 "dhchap_dhgroups": [ 00:04:25.389 "null", 00:04:25.389 "ffdhe2048", 00:04:25.389 "ffdhe3072", 00:04:25.389 "ffdhe4096", 00:04:25.389 "ffdhe6144", 00:04:25.389 "ffdhe8192" 00:04:25.389 ] 00:04:25.389 } 00:04:25.389 }, 00:04:25.389 { 00:04:25.390 "method": "bdev_nvme_set_hotplug", 00:04:25.390 "params": { 00:04:25.390 "period_us": 100000, 00:04:25.390 "enable": false 00:04:25.390 } 00:04:25.390 }, 00:04:25.390 { 00:04:25.390 "method": "bdev_wait_for_examine" 00:04:25.390 } 00:04:25.390 ] 00:04:25.390 }, 00:04:25.390 { 00:04:25.390 "subsystem": "scsi", 00:04:25.390 "config": null 00:04:25.390 }, 00:04:25.390 { 00:04:25.390 "subsystem": "scheduler", 00:04:25.390 "config": [ 00:04:25.390 { 00:04:25.390 "method": "framework_set_scheduler", 00:04:25.390 "params": { 00:04:25.390 "name": "static" 00:04:25.390 } 00:04:25.390 } 00:04:25.390 ] 00:04:25.390 }, 00:04:25.390 { 00:04:25.390 "subsystem": "vhost_scsi", 00:04:25.390 "config": [] 00:04:25.390 }, 00:04:25.390 { 00:04:25.390 "subsystem": "vhost_blk", 00:04:25.390 "config": [] 00:04:25.390 }, 00:04:25.390 { 00:04:25.390 "subsystem": "ublk", 00:04:25.390 "config": [] 00:04:25.390 }, 00:04:25.390 { 00:04:25.390 "subsystem": "nbd", 00:04:25.390 "config": [] 00:04:25.390 }, 00:04:25.390 { 00:04:25.390 "subsystem": "nvmf", 00:04:25.390 "config": [ 00:04:25.390 { 00:04:25.390 "method": "nvmf_set_config", 00:04:25.390 "params": { 00:04:25.390 "discovery_filter": "match_any", 00:04:25.390 "admin_cmd_passthru": { 00:04:25.390 "identify_ctrlr": false 00:04:25.390 } 00:04:25.390 } 00:04:25.390 }, 00:04:25.390 { 00:04:25.390 "method": "nvmf_set_max_subsystems", 00:04:25.390 "params": { 00:04:25.390 "max_subsystems": 1024 00:04:25.390 } 00:04:25.390 }, 00:04:25.390 { 00:04:25.390 "method": "nvmf_set_crdt", 00:04:25.390 "params": { 00:04:25.390 "crdt1": 0, 00:04:25.390 "crdt2": 0, 00:04:25.390 "crdt3": 0 00:04:25.390 } 00:04:25.390 }, 00:04:25.390 { 00:04:25.390 "method": "nvmf_create_transport", 00:04:25.390 "params": { 00:04:25.390 "trtype": "TCP", 00:04:25.390 "max_queue_depth": 128, 00:04:25.390 "max_io_qpairs_per_ctrlr": 127, 00:04:25.390 "in_capsule_data_size": 4096, 00:04:25.390 "max_io_size": 131072, 00:04:25.390 "io_unit_size": 131072, 00:04:25.390 "max_aq_depth": 128, 00:04:25.390 "num_shared_buffers": 511, 00:04:25.390 "buf_cache_size": 4294967295, 00:04:25.390 "dif_insert_or_strip": false, 00:04:25.390 "zcopy": false, 00:04:25.390 "c2h_success": true, 00:04:25.390 "sock_priority": 0, 00:04:25.390 "abort_timeout_sec": 1, 00:04:25.390 "ack_timeout": 0, 00:04:25.390 "data_wr_pool_size": 0 00:04:25.390 } 00:04:25.390 } 00:04:25.390 ] 00:04:25.390 }, 00:04:25.390 { 00:04:25.390 "subsystem": "iscsi", 00:04:25.390 "config": [ 00:04:25.390 { 00:04:25.390 "method": "iscsi_set_options", 00:04:25.390 "params": { 00:04:25.390 "node_base": "iqn.2016-06.io.spdk", 00:04:25.390 "max_sessions": 128, 00:04:25.390 "max_connections_per_session": 2, 00:04:25.390 "max_queue_depth": 64, 00:04:25.390 "default_time2wait": 2, 00:04:25.390 "default_time2retain": 20, 00:04:25.390 "first_burst_length": 8192, 00:04:25.390 "immediate_data": true, 00:04:25.390 "allow_duplicated_isid": false, 00:04:25.390 
"error_recovery_level": 0, 00:04:25.390 "nop_timeout": 60, 00:04:25.390 "nop_in_interval": 30, 00:04:25.390 "disable_chap": false, 00:04:25.390 "require_chap": false, 00:04:25.390 "mutual_chap": false, 00:04:25.390 "chap_group": 0, 00:04:25.390 "max_large_datain_per_connection": 64, 00:04:25.390 "max_r2t_per_connection": 4, 00:04:25.390 "pdu_pool_size": 36864, 00:04:25.390 "immediate_data_pool_size": 16384, 00:04:25.390 "data_out_pool_size": 2048 00:04:25.390 } 00:04:25.390 } 00:04:25.390 ] 00:04:25.390 } 00:04:25.390 ] 00:04:25.390 } 00:04:25.390 23:31:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:25.390 23:31:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3660982 00:04:25.390 23:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3660982 ']' 00:04:25.390 23:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3660982 00:04:25.390 23:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:25.390 23:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:25.390 23:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3660982 00:04:25.390 23:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:25.390 23:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:25.390 23:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3660982' 00:04:25.390 killing process with pid 3660982 00:04:25.390 23:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3660982 00:04:25.390 23:31:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3660982 00:04:25.956 23:31:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3661122 00:04:25.956 23:31:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:25.956 23:31:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:31.266 23:31:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3661122 00:04:31.266 23:31:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3661122 ']' 00:04:31.266 23:31:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3661122 00:04:31.266 23:31:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:31.266 23:31:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:31.266 23:31:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3661122 00:04:31.266 23:31:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:31.266 23:31:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:31.266 23:31:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3661122' 00:04:31.266 killing process with pid 3661122 00:04:31.266 23:31:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3661122 00:04:31.266 23:31:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3661122 
00:04:31.266 23:31:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:31.266 23:31:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:31.266 00:04:31.266 real 0m6.536s 00:04:31.266 user 0m6.193s 00:04:31.266 sys 0m0.618s 00:04:31.266 23:31:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.266 23:31:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:31.266 ************************************ 00:04:31.266 END TEST skip_rpc_with_json 00:04:31.266 ************************************ 00:04:31.524 23:31:06 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:31.524 23:31:06 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:31.524 23:31:06 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.524 23:31:06 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.524 23:31:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.524 ************************************ 00:04:31.524 START TEST skip_rpc_with_delay 00:04:31.524 ************************************ 00:04:31.524 23:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:31.524 23:31:06 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:31.524 23:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:31.524 23:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:31.524 23:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:31.524 23:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:31.524 23:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:31.524 23:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:31.524 23:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:31.524 23:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:31.524 23:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:31.524 23:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:31.524 23:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:31.524 [2024-07-15 23:31:06.482513] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
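Note: this error is the outcome skip_rpc_with_delay deliberately provokes: '--no-rpc-server' and '--wait-for-rpc' contradict each other (pause until an RPC arrives on a server that will never start), so spdk_tgt must refuse to boot, and the NOT wrapper asserts the nonzero exit. The unclaim_cpu_cores message that follows appears to be cleanup fallout from the aborted startup rather than a separate failure. Reproduced in one line, flags exactly as in the command under test:

  build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; echo "exit=$?"   # expected: the error above, nonzero exit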
00:04:31.524 [2024-07-15 23:31:06.482630] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:31.524 23:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:31.524 23:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:31.524 23:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:31.525 23:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:31.525 00:04:31.525 real 0m0.067s 00:04:31.525 user 0m0.047s 00:04:31.525 sys 0m0.020s 00:04:31.525 23:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.525 23:31:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:31.525 ************************************ 00:04:31.525 END TEST skip_rpc_with_delay 00:04:31.525 ************************************ 00:04:31.525 23:31:06 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:31.525 23:31:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:31.525 23:31:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:31.525 23:31:06 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:31.525 23:31:06 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.525 23:31:06 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.525 23:31:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.525 ************************************ 00:04:31.525 START TEST exit_on_failed_rpc_init 00:04:31.525 ************************************ 00:04:31.525 23:31:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:31.525 23:31:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3661840 00:04:31.525 23:31:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:31.525 23:31:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3661840 00:04:31.525 23:31:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 3661840 ']' 00:04:31.525 23:31:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.525 23:31:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:31.525 23:31:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:31.525 23:31:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:31.525 23:31:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:31.525 [2024-07-15 23:31:06.601032] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
00:04:31.525 [2024-07-15 23:31:06.601135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3661840 ] 00:04:31.525 EAL: No free 2048 kB hugepages reported on node 1 00:04:31.783 [2024-07-15 23:31:06.658323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.783 [2024-07-15 23:31:06.759907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.042 23:31:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:32.042 23:31:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:32.042 23:31:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:32.042 23:31:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:32.042 23:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:32.042 23:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:32.042 23:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:32.042 23:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:32.042 23:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:32.042 23:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:32.042 23:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:32.042 23:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:32.042 23:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:32.042 23:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:32.042 23:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:32.042 [2024-07-15 23:31:07.058466] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
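[Editor's note] The second spdk_tgt launched here (core mask 0x2) deliberately shares the default RPC socket /var/tmp/spdk.sock with the instance already running, so it is expected to die with "RPC Unix domain socket path ... in use", as the lines just below show. A hedged stand-alone reproduction of the same collision:
./build/bin/spdk_tgt -m 0x1 &        # first instance claims /var/tmp/spdk.sock
first=$!
sleep 2                              # crude wait; the harness uses waitforlisten instead
if ./build/bin/spdk_tgt -m 0x2; then # same default -r /var/tmp/spdk.sock
    echo "unexpected: second instance started" >&2
else
    echo "expected failure: RPC socket already in use"
fi
kill -SIGINT "$first"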
00:04:32.042 [2024-07-15 23:31:07.058545] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3661850 ] 00:04:32.042 EAL: No free 2048 kB hugepages reported on node 1 00:04:32.042 [2024-07-15 23:31:07.115732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.300 [2024-07-15 23:31:07.226921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:32.300 [2024-07-15 23:31:07.227067] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:32.300 [2024-07-15 23:31:07.227090] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:32.300 [2024-07-15 23:31:07.227103] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:32.300 23:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:32.300 23:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:32.300 23:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:32.300 23:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:32.300 23:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:32.300 23:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:32.300 23:31:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:32.300 23:31:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3661840 00:04:32.300 23:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 3661840 ']' 00:04:32.300 23:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 3661840 00:04:32.300 23:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:32.300 23:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:32.300 23:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3661840 00:04:32.300 23:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:32.300 23:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:32.300 23:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3661840' 00:04:32.300 killing process with pid 3661840 00:04:32.300 23:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 3661840 00:04:32.300 23:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 3661840 00:04:32.868 00:04:32.868 real 0m1.256s 00:04:32.868 user 0m1.416s 00:04:32.868 sys 0m0.443s 00:04:32.868 23:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.868 23:31:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:32.868 ************************************ 00:04:32.868 END TEST exit_on_failed_rpc_init 00:04:32.868 ************************************ 00:04:32.868 23:31:07 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:32.868 23:31:07 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:32.868 00:04:32.868 real 0m13.577s 00:04:32.868 user 0m12.945s 00:04:32.868 sys 0m1.539s 00:04:32.868 23:31:07 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.868 23:31:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.868 ************************************ 00:04:32.868 END TEST skip_rpc 00:04:32.868 ************************************ 00:04:32.868 23:31:07 -- common/autotest_common.sh@1142 -- # return 0 00:04:32.868 23:31:07 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:32.868 23:31:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.868 23:31:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.868 23:31:07 -- common/autotest_common.sh@10 -- # set +x 00:04:32.868 ************************************ 00:04:32.868 START TEST rpc_client 00:04:32.868 ************************************ 00:04:32.868 23:31:07 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:32.868 * Looking for test storage... 00:04:32.868 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:32.869 23:31:07 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:32.869 OK 00:04:32.869 23:31:07 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:32.869 00:04:32.869 real 0m0.070s 00:04:32.869 user 0m0.030s 00:04:32.869 sys 0m0.044s 00:04:32.869 23:31:07 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.869 23:31:07 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:32.869 ************************************ 00:04:32.869 END TEST rpc_client 00:04:32.869 ************************************ 00:04:32.869 23:31:07 -- common/autotest_common.sh@1142 -- # return 0 00:04:32.869 23:31:07 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:32.869 23:31:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.869 23:31:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.869 23:31:07 -- common/autotest_common.sh@10 -- # set +x 00:04:33.162 ************************************ 00:04:33.162 START TEST json_config 00:04:33.162 ************************************ 00:04:33.162 23:31:07 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:33.162 23:31:08 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:33.162 23:31:08 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:33.162 23:31:08 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:33.162 23:31:08 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:33.162 23:31:08 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:33.162 23:31:08 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:33.162 23:31:08 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:33.162 23:31:08 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:33.162 23:31:08 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:33.163 
23:31:08 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:33.163 23:31:08 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:33.163 23:31:08 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:33.163 23:31:08 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:04:33.163 23:31:08 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:04:33.163 23:31:08 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:33.163 23:31:08 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:33.163 23:31:08 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:33.163 23:31:08 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:33.163 23:31:08 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:33.163 23:31:08 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:33.163 23:31:08 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:33.163 23:31:08 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:33.163 23:31:08 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.163 23:31:08 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.163 23:31:08 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.163 23:31:08 json_config -- paths/export.sh@5 -- # export PATH 00:04:33.163 23:31:08 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.163 23:31:08 json_config -- nvmf/common.sh@47 -- # : 0 00:04:33.163 23:31:08 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:33.163 23:31:08 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:33.163 23:31:08 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:33.163 23:31:08 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:33.163 23:31:08 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:33.163 23:31:08 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:33.163 23:31:08 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:33.163 23:31:08 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:33.163 23:31:08 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:33.163 23:31:08 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:33.163 23:31:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:33.163 23:31:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:33.163 23:31:08 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:33.163 23:31:08 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:33.163 23:31:08 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:33.163 23:31:08 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:33.163 23:31:08 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:33.163 23:31:08 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:33.163 23:31:08 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:33.163 23:31:08 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:33.163 23:31:08 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:33.163 23:31:08 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:33.163 23:31:08 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:33.163 23:31:08 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:33.163 INFO: JSON configuration test init 00:04:33.163 23:31:08 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:33.163 23:31:08 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:33.163 23:31:08 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:33.163 23:31:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.163 23:31:08 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:33.163 23:31:08 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:33.163 23:31:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.163 23:31:08 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:33.163 23:31:08 json_config -- json_config/common.sh@9 -- # local app=target 00:04:33.163 23:31:08 json_config -- json_config/common.sh@10 -- # shift 00:04:33.163 23:31:08 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:33.163 23:31:08 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:33.163 23:31:08 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:33.163 23:31:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:33.163 23:31:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:33.163 23:31:08 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3662094 00:04:33.163 23:31:08 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:33.163 23:31:08 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:33.163 Waiting for target to run... 00:04:33.163 23:31:08 json_config -- json_config/common.sh@25 -- # waitforlisten 3662094 /var/tmp/spdk_tgt.sock 00:04:33.163 23:31:08 json_config -- common/autotest_common.sh@829 -- # '[' -z 3662094 ']' 00:04:33.163 23:31:08 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:33.163 23:31:08 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:33.163 23:31:08 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:33.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:33.163 23:31:08 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:33.163 23:31:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.163 [2024-07-15 23:31:08.098880] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:04:33.163 [2024-07-15 23:31:08.098981] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3662094 ] 00:04:33.163 EAL: No free 2048 kB hugepages reported on node 1 00:04:33.421 [2024-07-15 23:31:08.419305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.421 [2024-07-15 23:31:08.499173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.987 23:31:09 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:33.987 23:31:09 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:33.987 23:31:09 json_config -- json_config/common.sh@26 -- # echo '' 00:04:33.987 00:04:33.987 23:31:09 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:33.987 23:31:09 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:33.987 23:31:09 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:33.987 23:31:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.987 23:31:09 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:33.987 23:31:09 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:33.987 23:31:09 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:33.987 23:31:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.987 23:31:09 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:33.987 23:31:09 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:33.987 23:31:09 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:37.279 23:31:12 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:37.279 23:31:12 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:37.279 23:31:12 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:37.279 23:31:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.279 23:31:12 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:37.279 23:31:12 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:37.279 23:31:12 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:37.279 23:31:12 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:37.279 23:31:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:37.279 23:31:12 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:37.537 23:31:12 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:37.537 23:31:12 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:37.537 23:31:12 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:37.537 23:31:12 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:37.537 23:31:12 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:37.537 23:31:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.537 23:31:12 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:37.537 23:31:12 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:37.537 23:31:12 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:37.537 23:31:12 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:37.537 23:31:12 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:37.537 23:31:12 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:37.537 23:31:12 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:37.537 23:31:12 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:37.537 23:31:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.537 23:31:12 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:37.537 23:31:12 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:37.537 23:31:12 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:37.537 23:31:12 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:37.537 23:31:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:37.794 MallocForNvmf0 00:04:37.794 23:31:12 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:37.794 23:31:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:38.052 MallocForNvmf1 00:04:38.052 23:31:12 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:38.052 23:31:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:38.317 [2024-07-15 23:31:13.180289] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:38.317 23:31:13 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:38.317 23:31:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:38.574 23:31:13 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:38.574 23:31:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:38.574 23:31:13 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:38.574 23:31:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:38.831 23:31:13 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:38.831 23:31:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:39.088 [2024-07-15 23:31:14.147364] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:39.088 23:31:14 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:39.088 23:31:14 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:39.088 23:31:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.088 23:31:14 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:39.088 23:31:14 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:39.088 23:31:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.088 23:31:14 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:39.088 23:31:14 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:39.088 23:31:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:39.345 MallocBdevForConfigChangeCheck 00:04:39.345 23:31:14 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:39.346 23:31:14 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:39.346 23:31:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.602 23:31:14 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:39.602 23:31:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:39.860 23:31:14 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:39.860 INFO: shutting down applications... 00:04:39.860 23:31:14 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:39.860 23:31:14 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:39.860 23:31:14 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:39.860 23:31:14 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:41.759 Calling clear_iscsi_subsystem 00:04:41.759 Calling clear_nvmf_subsystem 00:04:41.759 Calling clear_nbd_subsystem 00:04:41.759 Calling clear_ublk_subsystem 00:04:41.759 Calling clear_vhost_blk_subsystem 00:04:41.759 Calling clear_vhost_scsi_subsystem 00:04:41.759 Calling clear_bdev_subsystem 00:04:41.759 23:31:16 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:41.759 23:31:16 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:41.759 23:31:16 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:41.759 23:31:16 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:41.759 23:31:16 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:41.759 23:31:16 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:41.759 23:31:16 json_config -- json_config/json_config.sh@345 -- # break 00:04:41.759 23:31:16 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:41.759 23:31:16 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:41.759 23:31:16 json_config -- json_config/common.sh@31 -- # local app=target 00:04:41.759 23:31:16 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:41.759 23:31:16 json_config -- json_config/common.sh@35 -- # [[ -n 3662094 ]] 00:04:41.759 23:31:16 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3662094 00:04:41.759 23:31:16 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:41.759 23:31:16 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:41.759 23:31:16 json_config -- json_config/common.sh@41 -- # kill -0 3662094 00:04:41.759 23:31:16 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:42.328 23:31:17 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:42.328 23:31:17 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:42.328 23:31:17 json_config -- json_config/common.sh@41 -- # kill -0 3662094 00:04:42.328 23:31:17 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:42.328 23:31:17 json_config -- json_config/common.sh@43 -- # break 00:04:42.328 23:31:17 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:42.328 23:31:17 json_config -- json_config/common.sh@53 -- # echo 'SPDK target 
shutdown done' 00:04:42.328 SPDK target shutdown done 00:04:42.328 23:31:17 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:42.328 INFO: relaunching applications... 00:04:42.328 23:31:17 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:42.328 23:31:17 json_config -- json_config/common.sh@9 -- # local app=target 00:04:42.328 23:31:17 json_config -- json_config/common.sh@10 -- # shift 00:04:42.328 23:31:17 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:42.328 23:31:17 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:42.328 23:31:17 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:42.328 23:31:17 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:42.328 23:31:17 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:42.328 23:31:17 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3663289 00:04:42.328 23:31:17 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:42.328 23:31:17 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:42.328 Waiting for target to run... 00:04:42.328 23:31:17 json_config -- json_config/common.sh@25 -- # waitforlisten 3663289 /var/tmp/spdk_tgt.sock 00:04:42.328 23:31:17 json_config -- common/autotest_common.sh@829 -- # '[' -z 3663289 ']' 00:04:42.328 23:31:17 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:42.328 23:31:17 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:42.328 23:31:17 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:42.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:42.328 23:31:17 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:42.328 23:31:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.328 [2024-07-15 23:31:17.350920] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
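[Editor's note] This relaunch is the second half of a save/restore round trip: the configuration captured earlier with the save_config RPC is fed back to a fresh target via --json. A sketch of that flow, with socket, memory, and core-mask flags copied from the log:
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
# ... SIGINT the old target and wait for it to exit (the kill -0 loop above), then:
./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json spdk_tgt_config.json &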
00:04:42.328 [2024-07-15 23:31:17.351032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3663289 ] 00:04:42.328 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.894 [2024-07-15 23:31:17.889525] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.894 [2024-07-15 23:31:17.975934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.174 [2024-07-15 23:31:21.007359] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:46.174 [2024-07-15 23:31:21.039771] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:46.739 23:31:21 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:46.739 23:31:21 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:46.739 23:31:21 json_config -- json_config/common.sh@26 -- # echo '' 00:04:46.739 00:04:46.739 23:31:21 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:46.739 23:31:21 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:46.739 INFO: Checking if target configuration is the same... 00:04:46.739 23:31:21 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:46.739 23:31:21 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:46.739 23:31:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:46.739 + '[' 2 -ne 2 ']' 00:04:46.739 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:46.739 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:46.739 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:46.739 +++ basename /dev/fd/62 00:04:46.739 ++ mktemp /tmp/62.XXX 00:04:46.739 + tmp_file_1=/tmp/62.yjy 00:04:46.739 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:46.739 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:46.739 + tmp_file_2=/tmp/spdk_tgt_config.json.vj5 00:04:46.739 + ret=0 00:04:46.739 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:47.403 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:47.403 + diff -u /tmp/62.yjy /tmp/spdk_tgt_config.json.vj5 00:04:47.403 + echo 'INFO: JSON config files are the same' 00:04:47.403 INFO: JSON config files are the same 00:04:47.403 + rm /tmp/62.yjy /tmp/spdk_tgt_config.json.vj5 00:04:47.403 + exit 0 00:04:47.403 23:31:22 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:47.403 23:31:22 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:47.403 INFO: changing configuration and checking if this can be detected... 
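[Editor's note] The json_diff.sh run above proves the restored target reproduces the saved configuration: both JSON documents are normalized to a stable key order, then compared with diff -u; identical output exits 0. A hedged equivalent — the harness normalizes with test/json_config/config_filter.py -method sort, while jq -S here is an editor substitution playing the same role:
live=$(mktemp) saved=$(mktemp)
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | jq -S . > "$live"
jq -S . spdk_tgt_config.json > "$saved"
if diff -u "$saved" "$live"; then
    echo 'INFO: JSON config files are the same'
fi
rm -f "$live" "$saved"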
00:04:47.403 23:31:22 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:47.403 23:31:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:47.403 23:31:22 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:47.403 23:31:22 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:47.403 23:31:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:47.403 + '[' 2 -ne 2 ']' 00:04:47.403 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:47.403 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:47.403 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:47.403 +++ basename /dev/fd/62 00:04:47.403 ++ mktemp /tmp/62.XXX 00:04:47.403 + tmp_file_1=/tmp/62.hw9 00:04:47.403 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:47.403 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:47.403 + tmp_file_2=/tmp/spdk_tgt_config.json.zWW 00:04:47.403 + ret=0 00:04:47.403 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:47.966 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:47.966 + diff -u /tmp/62.hw9 /tmp/spdk_tgt_config.json.zWW 00:04:47.966 + ret=1 00:04:47.966 + echo '=== Start of file: /tmp/62.hw9 ===' 00:04:47.966 + cat /tmp/62.hw9 00:04:47.966 + echo '=== End of file: /tmp/62.hw9 ===' 00:04:47.966 + echo '' 00:04:47.966 + echo '=== Start of file: /tmp/spdk_tgt_config.json.zWW ===' 00:04:47.966 + cat /tmp/spdk_tgt_config.json.zWW 00:04:47.966 + echo '=== End of file: /tmp/spdk_tgt_config.json.zWW ===' 00:04:47.966 + echo '' 00:04:47.966 + rm /tmp/62.hw9 /tmp/spdk_tgt_config.json.zWW 00:04:47.966 + exit 1 00:04:47.966 23:31:22 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:47.966 INFO: configuration change detected. 
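[Editor's note] The change-detection step above works because the throwaway bdev MallocBdevForConfigChangeCheck was created earlier precisely so that deleting it is a guaranteed, harmless configuration change: after the delete, the re-saved config no longer matches and the diff returns 1. In sketch form (jq -S again standing in for config_filter.py):
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
    bdev_malloc_delete MallocBdevForConfigChangeCheck
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | jq -S . > now.json
if ! diff -u <(jq -S . spdk_tgt_config.json) now.json; then
    echo 'INFO: configuration change detected.'
fi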
00:04:47.966 23:31:22 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:47.966 23:31:22 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:47.966 23:31:22 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:47.966 23:31:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.966 23:31:22 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:47.966 23:31:22 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:47.966 23:31:22 json_config -- json_config/json_config.sh@317 -- # [[ -n 3663289 ]] 00:04:47.966 23:31:22 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:47.966 23:31:22 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:47.966 23:31:22 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:47.966 23:31:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.966 23:31:22 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:47.966 23:31:22 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:47.966 23:31:22 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:47.966 23:31:22 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:47.966 23:31:22 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:47.966 23:31:22 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:47.966 23:31:22 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:47.966 23:31:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.966 23:31:22 json_config -- json_config/json_config.sh@323 -- # killprocess 3663289 00:04:47.966 23:31:22 json_config -- common/autotest_common.sh@948 -- # '[' -z 3663289 ']' 00:04:47.966 23:31:22 json_config -- common/autotest_common.sh@952 -- # kill -0 3663289 00:04:47.966 23:31:22 json_config -- common/autotest_common.sh@953 -- # uname 00:04:47.966 23:31:22 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:47.966 23:31:22 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3663289 00:04:47.966 23:31:22 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:47.966 23:31:22 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:47.966 23:31:22 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3663289' 00:04:47.966 killing process with pid 3663289 00:04:47.966 23:31:22 json_config -- common/autotest_common.sh@967 -- # kill 3663289 00:04:47.966 23:31:22 json_config -- common/autotest_common.sh@972 -- # wait 3663289 00:04:49.860 23:31:24 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:49.860 23:31:24 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:49.860 23:31:24 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:49.860 23:31:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.860 23:31:24 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:49.860 23:31:24 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:49.860 INFO: Success 00:04:49.860 00:04:49.860 real 0m16.517s 
00:04:49.860 user 0m18.387s 00:04:49.860 sys 0m2.068s 00:04:49.860 23:31:24 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.860 23:31:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.860 ************************************ 00:04:49.860 END TEST json_config 00:04:49.860 ************************************ 00:04:49.860 23:31:24 -- common/autotest_common.sh@1142 -- # return 0 00:04:49.860 23:31:24 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:49.860 23:31:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.860 23:31:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.860 23:31:24 -- common/autotest_common.sh@10 -- # set +x 00:04:49.860 ************************************ 00:04:49.860 START TEST json_config_extra_key 00:04:49.860 ************************************ 00:04:49.860 23:31:24 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:49.860 23:31:24 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:49.860 23:31:24 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:49.860 23:31:24 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:49.860 23:31:24 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:49.860 23:31:24 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:49.860 23:31:24 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:49.860 23:31:24 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:49.860 23:31:24 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:49.860 23:31:24 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:49.860 23:31:24 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:49.860 23:31:24 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:49.860 23:31:24 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:49.860 23:31:24 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:04:49.860 23:31:24 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:04:49.860 23:31:24 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:49.860 23:31:24 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:49.860 23:31:24 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:49.860 23:31:24 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:49.860 23:31:24 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:49.860 23:31:24 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:49.860 23:31:24 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:49.860 23:31:24 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:49.860 23:31:24 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.860 23:31:24 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.860 23:31:24 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.860 23:31:24 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:49.860 23:31:24 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.860 23:31:24 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:49.860 23:31:24 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:49.860 23:31:24 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:49.860 23:31:24 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:49.860 23:31:24 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:49.860 23:31:24 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:49.860 23:31:24 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:49.860 23:31:24 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:49.860 23:31:24 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:49.860 23:31:24 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:49.860 23:31:24 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:49.860 23:31:24 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:49.860 23:31:24 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:49.860 23:31:24 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:49.860 23:31:24 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:49.860 23:31:24 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:49.860 23:31:24 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:49.860 23:31:24 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:49.860 23:31:24 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:49.860 23:31:24 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:49.860 INFO: launching applications... 00:04:49.860 23:31:24 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:49.860 23:31:24 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:49.860 23:31:24 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:49.860 23:31:24 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:49.860 23:31:24 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:49.860 23:31:24 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:49.860 23:31:24 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:49.860 23:31:24 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:49.860 23:31:24 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3664328 00:04:49.860 23:31:24 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:49.860 23:31:24 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:49.860 Waiting for target to run... 00:04:49.860 23:31:24 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3664328 /var/tmp/spdk_tgt.sock 00:04:49.860 23:31:24 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 3664328 ']' 00:04:49.860 23:31:24 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:49.860 23:31:24 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:49.860 23:31:24 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:49.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:49.860 23:31:24 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:49.860 23:31:24 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:49.860 [2024-07-15 23:31:24.671473] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
00:04:49.861 [2024-07-15 23:31:24.671568] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3664328 ] 00:04:49.861 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.119 [2024-07-15 23:31:25.197793] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.380 [2024-07-15 23:31:25.292347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.637 23:31:25 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:50.637 23:31:25 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:50.637 23:31:25 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:50.637 00:04:50.637 23:31:25 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:50.637 INFO: shutting down applications... 00:04:50.637 23:31:25 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:50.637 23:31:25 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:50.637 23:31:25 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:50.637 23:31:25 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3664328 ]] 00:04:50.637 23:31:25 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3664328 00:04:50.637 23:31:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:50.637 23:31:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:50.637 23:31:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3664328 00:04:50.638 23:31:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:51.200 23:31:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:51.200 23:31:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.200 23:31:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3664328 00:04:51.200 23:31:26 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:51.200 23:31:26 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:51.200 23:31:26 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:51.200 23:31:26 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:51.200 SPDK target shutdown done 00:04:51.200 23:31:26 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:51.200 Success 00:04:51.200 00:04:51.200 real 0m1.571s 00:04:51.200 user 0m1.368s 00:04:51.200 sys 0m0.640s 00:04:51.200 23:31:26 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.200 23:31:26 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:51.200 ************************************ 00:04:51.200 END TEST json_config_extra_key 00:04:51.200 ************************************ 00:04:51.200 23:31:26 -- common/autotest_common.sh@1142 -- # return 0 00:04:51.200 23:31:26 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:51.200 23:31:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.200 23:31:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.200 23:31:26 -- 
common/autotest_common.sh@10 -- # set +x 00:04:51.200 ************************************ 00:04:51.200 START TEST alias_rpc 00:04:51.200 ************************************ 00:04:51.201 23:31:26 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:51.201 * Looking for test storage... 00:04:51.201 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:51.201 23:31:26 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:51.201 23:31:26 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3664512 00:04:51.201 23:31:26 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:51.201 23:31:26 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3664512 00:04:51.201 23:31:26 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 3664512 ']' 00:04:51.201 23:31:26 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.201 23:31:26 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:51.201 23:31:26 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.201 23:31:26 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:51.201 23:31:26 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.201 [2024-07-15 23:31:26.289510] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:04:51.201 [2024-07-15 23:31:26.289608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3664512 ] 00:04:51.201 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.457 [2024-07-15 23:31:26.348766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.457 [2024-07-15 23:31:26.453964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.715 23:31:26 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:51.715 23:31:26 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:51.715 23:31:26 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:51.973 23:31:26 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3664512 00:04:51.973 23:31:26 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 3664512 ']' 00:04:51.973 23:31:26 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 3664512 00:04:51.973 23:31:26 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:51.973 23:31:26 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:51.973 23:31:26 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3664512 00:04:51.973 23:31:27 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:51.973 23:31:27 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:51.973 23:31:27 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3664512' 00:04:51.973 killing process with pid 3664512 00:04:51.973 23:31:27 alias_rpc -- 
common/autotest_common.sh@967 -- # kill 3664512 00:04:51.973 23:31:27 alias_rpc -- common/autotest_common.sh@972 -- # wait 3664512 00:04:52.540 00:04:52.540 real 0m1.262s 00:04:52.540 user 0m1.330s 00:04:52.540 sys 0m0.438s 00:04:52.540 23:31:27 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.540 23:31:27 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.540 ************************************ 00:04:52.540 END TEST alias_rpc 00:04:52.540 ************************************ 00:04:52.540 23:31:27 -- common/autotest_common.sh@1142 -- # return 0 00:04:52.540 23:31:27 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:52.540 23:31:27 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:52.540 23:31:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.540 23:31:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.540 23:31:27 -- common/autotest_common.sh@10 -- # set +x 00:04:52.540 ************************************ 00:04:52.540 START TEST spdkcli_tcp 00:04:52.540 ************************************ 00:04:52.540 23:31:27 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:52.540 * Looking for test storage... 00:04:52.540 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:52.540 23:31:27 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:52.540 23:31:27 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:52.540 23:31:27 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:52.540 23:31:27 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:52.540 23:31:27 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:52.540 23:31:27 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:52.540 23:31:27 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:52.540 23:31:27 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:52.540 23:31:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:52.540 23:31:27 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3664704 00:04:52.540 23:31:27 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:52.540 23:31:27 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3664704 00:04:52.540 23:31:27 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 3664704 ']' 00:04:52.540 23:31:27 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.540 23:31:27 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:52.540 23:31:27 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.540 23:31:27 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:52.540 23:31:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:52.540 [2024-07-15 23:31:27.609612] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
00:04:52.540 [2024-07-15 23:31:27.609714] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3664704 ] 00:04:52.540 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.799 [2024-07-15 23:31:27.667583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:52.799 [2024-07-15 23:31:27.776742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.799 [2024-07-15 23:31:27.776746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.059 23:31:28 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:53.059 23:31:28 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:53.059 23:31:28 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3664832 00:04:53.059 23:31:28 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:53.059 23:31:28 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:53.317 [ 00:04:53.317 "bdev_malloc_delete", 00:04:53.317 "bdev_malloc_create", 00:04:53.317 "bdev_null_resize", 00:04:53.317 "bdev_null_delete", 00:04:53.317 "bdev_null_create", 00:04:53.317 "bdev_nvme_cuse_unregister", 00:04:53.317 "bdev_nvme_cuse_register", 00:04:53.317 "bdev_opal_new_user", 00:04:53.317 "bdev_opal_set_lock_state", 00:04:53.317 "bdev_opal_delete", 00:04:53.317 "bdev_opal_get_info", 00:04:53.317 "bdev_opal_create", 00:04:53.317 "bdev_nvme_opal_revert", 00:04:53.317 "bdev_nvme_opal_init", 00:04:53.317 "bdev_nvme_send_cmd", 00:04:53.317 "bdev_nvme_get_path_iostat", 00:04:53.317 "bdev_nvme_get_mdns_discovery_info", 00:04:53.317 "bdev_nvme_stop_mdns_discovery", 00:04:53.317 "bdev_nvme_start_mdns_discovery", 00:04:53.317 "bdev_nvme_set_multipath_policy", 00:04:53.317 "bdev_nvme_set_preferred_path", 00:04:53.317 "bdev_nvme_get_io_paths", 00:04:53.317 "bdev_nvme_remove_error_injection", 00:04:53.317 "bdev_nvme_add_error_injection", 00:04:53.317 "bdev_nvme_get_discovery_info", 00:04:53.317 "bdev_nvme_stop_discovery", 00:04:53.317 "bdev_nvme_start_discovery", 00:04:53.317 "bdev_nvme_get_controller_health_info", 00:04:53.317 "bdev_nvme_disable_controller", 00:04:53.317 "bdev_nvme_enable_controller", 00:04:53.317 "bdev_nvme_reset_controller", 00:04:53.317 "bdev_nvme_get_transport_statistics", 00:04:53.317 "bdev_nvme_apply_firmware", 00:04:53.317 "bdev_nvme_detach_controller", 00:04:53.317 "bdev_nvme_get_controllers", 00:04:53.317 "bdev_nvme_attach_controller", 00:04:53.317 "bdev_nvme_set_hotplug", 00:04:53.317 "bdev_nvme_set_options", 00:04:53.317 "bdev_passthru_delete", 00:04:53.317 "bdev_passthru_create", 00:04:53.317 "bdev_lvol_set_parent_bdev", 00:04:53.317 "bdev_lvol_set_parent", 00:04:53.317 "bdev_lvol_check_shallow_copy", 00:04:53.317 "bdev_lvol_start_shallow_copy", 00:04:53.317 "bdev_lvol_grow_lvstore", 00:04:53.317 "bdev_lvol_get_lvols", 00:04:53.317 "bdev_lvol_get_lvstores", 00:04:53.317 "bdev_lvol_delete", 00:04:53.317 "bdev_lvol_set_read_only", 00:04:53.317 "bdev_lvol_resize", 00:04:53.317 "bdev_lvol_decouple_parent", 00:04:53.317 "bdev_lvol_inflate", 00:04:53.317 "bdev_lvol_rename", 00:04:53.317 "bdev_lvol_clone_bdev", 00:04:53.317 "bdev_lvol_clone", 00:04:53.317 "bdev_lvol_snapshot", 00:04:53.317 "bdev_lvol_create", 00:04:53.317 "bdev_lvol_delete_lvstore", 00:04:53.317 
"bdev_lvol_rename_lvstore", 00:04:53.317 "bdev_lvol_create_lvstore", 00:04:53.317 "bdev_raid_set_options", 00:04:53.317 "bdev_raid_remove_base_bdev", 00:04:53.317 "bdev_raid_add_base_bdev", 00:04:53.317 "bdev_raid_delete", 00:04:53.317 "bdev_raid_create", 00:04:53.317 "bdev_raid_get_bdevs", 00:04:53.317 "bdev_error_inject_error", 00:04:53.317 "bdev_error_delete", 00:04:53.317 "bdev_error_create", 00:04:53.317 "bdev_split_delete", 00:04:53.317 "bdev_split_create", 00:04:53.317 "bdev_delay_delete", 00:04:53.317 "bdev_delay_create", 00:04:53.317 "bdev_delay_update_latency", 00:04:53.317 "bdev_zone_block_delete", 00:04:53.317 "bdev_zone_block_create", 00:04:53.317 "blobfs_create", 00:04:53.318 "blobfs_detect", 00:04:53.318 "blobfs_set_cache_size", 00:04:53.318 "bdev_aio_delete", 00:04:53.318 "bdev_aio_rescan", 00:04:53.318 "bdev_aio_create", 00:04:53.318 "bdev_ftl_set_property", 00:04:53.318 "bdev_ftl_get_properties", 00:04:53.318 "bdev_ftl_get_stats", 00:04:53.318 "bdev_ftl_unmap", 00:04:53.318 "bdev_ftl_unload", 00:04:53.318 "bdev_ftl_delete", 00:04:53.318 "bdev_ftl_load", 00:04:53.318 "bdev_ftl_create", 00:04:53.318 "bdev_virtio_attach_controller", 00:04:53.318 "bdev_virtio_scsi_get_devices", 00:04:53.318 "bdev_virtio_detach_controller", 00:04:53.318 "bdev_virtio_blk_set_hotplug", 00:04:53.318 "bdev_iscsi_delete", 00:04:53.318 "bdev_iscsi_create", 00:04:53.318 "bdev_iscsi_set_options", 00:04:53.318 "accel_error_inject_error", 00:04:53.318 "ioat_scan_accel_module", 00:04:53.318 "dsa_scan_accel_module", 00:04:53.318 "iaa_scan_accel_module", 00:04:53.318 "vfu_virtio_create_scsi_endpoint", 00:04:53.318 "vfu_virtio_scsi_remove_target", 00:04:53.318 "vfu_virtio_scsi_add_target", 00:04:53.318 "vfu_virtio_create_blk_endpoint", 00:04:53.318 "vfu_virtio_delete_endpoint", 00:04:53.318 "keyring_file_remove_key", 00:04:53.318 "keyring_file_add_key", 00:04:53.318 "keyring_linux_set_options", 00:04:53.318 "iscsi_get_histogram", 00:04:53.318 "iscsi_enable_histogram", 00:04:53.318 "iscsi_set_options", 00:04:53.318 "iscsi_get_auth_groups", 00:04:53.318 "iscsi_auth_group_remove_secret", 00:04:53.318 "iscsi_auth_group_add_secret", 00:04:53.318 "iscsi_delete_auth_group", 00:04:53.318 "iscsi_create_auth_group", 00:04:53.318 "iscsi_set_discovery_auth", 00:04:53.318 "iscsi_get_options", 00:04:53.318 "iscsi_target_node_request_logout", 00:04:53.318 "iscsi_target_node_set_redirect", 00:04:53.318 "iscsi_target_node_set_auth", 00:04:53.318 "iscsi_target_node_add_lun", 00:04:53.318 "iscsi_get_stats", 00:04:53.318 "iscsi_get_connections", 00:04:53.318 "iscsi_portal_group_set_auth", 00:04:53.318 "iscsi_start_portal_group", 00:04:53.318 "iscsi_delete_portal_group", 00:04:53.318 "iscsi_create_portal_group", 00:04:53.318 "iscsi_get_portal_groups", 00:04:53.318 "iscsi_delete_target_node", 00:04:53.318 "iscsi_target_node_remove_pg_ig_maps", 00:04:53.318 "iscsi_target_node_add_pg_ig_maps", 00:04:53.318 "iscsi_create_target_node", 00:04:53.318 "iscsi_get_target_nodes", 00:04:53.318 "iscsi_delete_initiator_group", 00:04:53.318 "iscsi_initiator_group_remove_initiators", 00:04:53.318 "iscsi_initiator_group_add_initiators", 00:04:53.318 "iscsi_create_initiator_group", 00:04:53.318 "iscsi_get_initiator_groups", 00:04:53.318 "nvmf_set_crdt", 00:04:53.318 "nvmf_set_config", 00:04:53.318 "nvmf_set_max_subsystems", 00:04:53.318 "nvmf_stop_mdns_prr", 00:04:53.318 "nvmf_publish_mdns_prr", 00:04:53.318 "nvmf_subsystem_get_listeners", 00:04:53.318 "nvmf_subsystem_get_qpairs", 00:04:53.318 "nvmf_subsystem_get_controllers", 00:04:53.318 
"nvmf_get_stats", 00:04:53.318 "nvmf_get_transports", 00:04:53.318 "nvmf_create_transport", 00:04:53.318 "nvmf_get_targets", 00:04:53.318 "nvmf_delete_target", 00:04:53.318 "nvmf_create_target", 00:04:53.318 "nvmf_subsystem_allow_any_host", 00:04:53.318 "nvmf_subsystem_remove_host", 00:04:53.318 "nvmf_subsystem_add_host", 00:04:53.318 "nvmf_ns_remove_host", 00:04:53.318 "nvmf_ns_add_host", 00:04:53.318 "nvmf_subsystem_remove_ns", 00:04:53.318 "nvmf_subsystem_add_ns", 00:04:53.318 "nvmf_subsystem_listener_set_ana_state", 00:04:53.318 "nvmf_discovery_get_referrals", 00:04:53.318 "nvmf_discovery_remove_referral", 00:04:53.318 "nvmf_discovery_add_referral", 00:04:53.318 "nvmf_subsystem_remove_listener", 00:04:53.318 "nvmf_subsystem_add_listener", 00:04:53.318 "nvmf_delete_subsystem", 00:04:53.318 "nvmf_create_subsystem", 00:04:53.318 "nvmf_get_subsystems", 00:04:53.318 "env_dpdk_get_mem_stats", 00:04:53.318 "nbd_get_disks", 00:04:53.318 "nbd_stop_disk", 00:04:53.318 "nbd_start_disk", 00:04:53.318 "ublk_recover_disk", 00:04:53.318 "ublk_get_disks", 00:04:53.318 "ublk_stop_disk", 00:04:53.318 "ublk_start_disk", 00:04:53.318 "ublk_destroy_target", 00:04:53.318 "ublk_create_target", 00:04:53.318 "virtio_blk_create_transport", 00:04:53.318 "virtio_blk_get_transports", 00:04:53.318 "vhost_controller_set_coalescing", 00:04:53.318 "vhost_get_controllers", 00:04:53.318 "vhost_delete_controller", 00:04:53.318 "vhost_create_blk_controller", 00:04:53.318 "vhost_scsi_controller_remove_target", 00:04:53.318 "vhost_scsi_controller_add_target", 00:04:53.318 "vhost_start_scsi_controller", 00:04:53.318 "vhost_create_scsi_controller", 00:04:53.318 "thread_set_cpumask", 00:04:53.318 "framework_get_governor", 00:04:53.318 "framework_get_scheduler", 00:04:53.318 "framework_set_scheduler", 00:04:53.318 "framework_get_reactors", 00:04:53.318 "thread_get_io_channels", 00:04:53.318 "thread_get_pollers", 00:04:53.318 "thread_get_stats", 00:04:53.318 "framework_monitor_context_switch", 00:04:53.318 "spdk_kill_instance", 00:04:53.318 "log_enable_timestamps", 00:04:53.318 "log_get_flags", 00:04:53.318 "log_clear_flag", 00:04:53.318 "log_set_flag", 00:04:53.318 "log_get_level", 00:04:53.318 "log_set_level", 00:04:53.318 "log_get_print_level", 00:04:53.318 "log_set_print_level", 00:04:53.318 "framework_enable_cpumask_locks", 00:04:53.318 "framework_disable_cpumask_locks", 00:04:53.318 "framework_wait_init", 00:04:53.318 "framework_start_init", 00:04:53.318 "scsi_get_devices", 00:04:53.318 "bdev_get_histogram", 00:04:53.318 "bdev_enable_histogram", 00:04:53.318 "bdev_set_qos_limit", 00:04:53.318 "bdev_set_qd_sampling_period", 00:04:53.318 "bdev_get_bdevs", 00:04:53.318 "bdev_reset_iostat", 00:04:53.318 "bdev_get_iostat", 00:04:53.318 "bdev_examine", 00:04:53.318 "bdev_wait_for_examine", 00:04:53.318 "bdev_set_options", 00:04:53.318 "notify_get_notifications", 00:04:53.318 "notify_get_types", 00:04:53.318 "accel_get_stats", 00:04:53.318 "accel_set_options", 00:04:53.318 "accel_set_driver", 00:04:53.318 "accel_crypto_key_destroy", 00:04:53.318 "accel_crypto_keys_get", 00:04:53.318 "accel_crypto_key_create", 00:04:53.318 "accel_assign_opc", 00:04:53.318 "accel_get_module_info", 00:04:53.318 "accel_get_opc_assignments", 00:04:53.318 "vmd_rescan", 00:04:53.318 "vmd_remove_device", 00:04:53.318 "vmd_enable", 00:04:53.318 "sock_get_default_impl", 00:04:53.318 "sock_set_default_impl", 00:04:53.318 "sock_impl_set_options", 00:04:53.318 "sock_impl_get_options", 00:04:53.318 "iobuf_get_stats", 00:04:53.318 "iobuf_set_options", 
00:04:53.318 "keyring_get_keys", 00:04:53.318 "framework_get_pci_devices", 00:04:53.318 "framework_get_config", 00:04:53.318 "framework_get_subsystems", 00:04:53.318 "vfu_tgt_set_base_path", 00:04:53.318 "trace_get_info", 00:04:53.318 "trace_get_tpoint_group_mask", 00:04:53.318 "trace_disable_tpoint_group", 00:04:53.318 "trace_enable_tpoint_group", 00:04:53.318 "trace_clear_tpoint_mask", 00:04:53.318 "trace_set_tpoint_mask", 00:04:53.318 "spdk_get_version", 00:04:53.318 "rpc_get_methods" 00:04:53.318 ] 00:04:53.318 23:31:28 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:53.318 23:31:28 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:53.318 23:31:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:53.318 23:31:28 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:53.318 23:31:28 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3664704 00:04:53.318 23:31:28 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 3664704 ']' 00:04:53.318 23:31:28 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 3664704 00:04:53.318 23:31:28 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:04:53.318 23:31:28 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:53.318 23:31:28 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3664704 00:04:53.318 23:31:28 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:53.318 23:31:28 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:53.318 23:31:28 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3664704' 00:04:53.318 killing process with pid 3664704 00:04:53.318 23:31:28 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 3664704 00:04:53.318 23:31:28 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 3664704 00:04:53.893 00:04:53.893 real 0m1.260s 00:04:53.893 user 0m2.228s 00:04:53.893 sys 0m0.427s 00:04:53.894 23:31:28 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.894 23:31:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:53.894 ************************************ 00:04:53.894 END TEST spdkcli_tcp 00:04:53.894 ************************************ 00:04:53.894 23:31:28 -- common/autotest_common.sh@1142 -- # return 0 00:04:53.894 23:31:28 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:53.894 23:31:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.894 23:31:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.894 23:31:28 -- common/autotest_common.sh@10 -- # set +x 00:04:53.894 ************************************ 00:04:53.894 START TEST dpdk_mem_utility 00:04:53.894 ************************************ 00:04:53.894 23:31:28 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:53.894 * Looking for test storage... 
00:04:53.894 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:53.894 23:31:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:53.894 23:31:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3664980 00:04:53.894 23:31:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.894 23:31:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3664980 00:04:53.894 23:31:28 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 3664980 ']' 00:04:53.894 23:31:28 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.894 23:31:28 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:53.894 23:31:28 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.894 23:31:28 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:53.894 23:31:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:53.894 [2024-07-15 23:31:28.919713] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:04:53.894 [2024-07-15 23:31:28.919802] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3664980 ] 00:04:53.894 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.894 [2024-07-15 23:31:28.979181] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.198 [2024-07-15 23:31:29.088057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.455 23:31:29 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:54.455 23:31:29 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:54.455 23:31:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:54.455 23:31:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:54.455 23:31:29 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:54.455 23:31:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:54.456 { 00:04:54.456 "filename": "/tmp/spdk_mem_dump.txt" 00:04:54.456 } 00:04:54.456 23:31:29 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:54.456 23:31:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:54.456 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:54.456 1 heaps totaling size 814.000000 MiB 00:04:54.456 size: 814.000000 MiB heap id: 0 00:04:54.456 end heaps---------- 00:04:54.456 8 mempools totaling size 598.116089 MiB 00:04:54.456 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:54.456 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:54.456 size: 84.521057 MiB name: bdev_io_3664980 00:04:54.456 size: 51.011292 MiB name: evtpool_3664980 00:04:54.456 
size: 50.003479 MiB name: msgpool_3664980 00:04:54.456 size: 21.763794 MiB name: PDU_Pool 00:04:54.456 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:54.456 size: 0.026123 MiB name: Session_Pool 00:04:54.456 end mempools------- 00:04:54.456 6 memzones totaling size 4.142822 MiB 00:04:54.456 size: 1.000366 MiB name: RG_ring_0_3664980 00:04:54.456 size: 1.000366 MiB name: RG_ring_1_3664980 00:04:54.456 size: 1.000366 MiB name: RG_ring_4_3664980 00:04:54.456 size: 1.000366 MiB name: RG_ring_5_3664980 00:04:54.456 size: 0.125366 MiB name: RG_ring_2_3664980 00:04:54.456 size: 0.015991 MiB name: RG_ring_3_3664980 00:04:54.456 end memzones------- 00:04:54.456 23:31:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:54.456 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:54.456 list of free elements. size: 12.519348 MiB 00:04:54.456 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:54.456 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:54.456 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:54.456 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:54.456 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:54.456 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:54.456 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:54.456 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:54.456 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:54.456 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:54.456 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:54.456 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:54.456 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:54.456 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:54.456 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:54.456 list of standard malloc elements. 
size: 199.218079 MiB 00:04:54.456 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:54.456 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:54.456 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:54.456 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:54.456 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:54.456 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:54.456 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:54.456 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:54.456 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:54.456 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:54.456 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:54.456 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:54.456 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:54.456 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:54.456 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:54.456 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:54.456 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:54.456 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:54.456 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:54.456 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:54.456 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:54.456 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:54.456 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:54.456 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:54.456 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:54.456 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:54.456 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:54.456 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:54.456 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:54.456 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:54.456 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:54.456 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:54.456 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:54.456 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:54.456 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:54.456 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:54.456 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:54.456 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:54.456 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:54.456 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:54.456 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:54.456 list of memzone associated elements. 
size: 602.262573 MiB 00:04:54.456 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:54.456 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:54.456 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:54.456 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:54.456 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:54.456 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3664980_0 00:04:54.456 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:54.456 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3664980_0 00:04:54.456 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:54.456 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3664980_0 00:04:54.456 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:54.456 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:54.456 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:54.456 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:54.456 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:54.456 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3664980 00:04:54.456 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:54.456 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3664980 00:04:54.456 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:54.456 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3664980 00:04:54.456 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:54.456 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:54.456 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:54.456 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:54.456 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:54.456 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:54.456 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:54.456 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:54.457 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:54.457 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3664980 00:04:54.457 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:54.457 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3664980 00:04:54.457 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:54.457 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3664980 00:04:54.457 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:54.457 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3664980 00:04:54.457 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:54.457 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3664980 00:04:54.457 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:54.457 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:54.457 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:54.457 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:54.457 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:54.457 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:54.457 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:54.457 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3664980 00:04:54.457 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:54.457 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:54.457 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:54.457 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:54.457 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:54.457 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3664980 00:04:54.457 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:54.457 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:54.457 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:54.457 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3664980 00:04:54.457 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:54.457 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3664980 00:04:54.457 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:54.457 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:54.457 23:31:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:54.457 23:31:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3664980 00:04:54.457 23:31:29 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 3664980 ']' 00:04:54.457 23:31:29 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 3664980 00:04:54.457 23:31:29 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:54.457 23:31:29 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:54.457 23:31:29 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3664980 00:04:54.457 23:31:29 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:54.457 23:31:29 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:54.457 23:31:29 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3664980' 00:04:54.457 killing process with pid 3664980 00:04:54.457 23:31:29 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 3664980 00:04:54.457 23:31:29 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 3664980 00:04:55.022 00:04:55.022 real 0m1.076s 00:04:55.022 user 0m1.077s 00:04:55.022 sys 0m0.379s 00:04:55.022 23:31:29 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.022 23:31:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:55.022 ************************************ 00:04:55.022 END TEST dpdk_mem_utility 00:04:55.022 ************************************ 00:04:55.022 23:31:29 -- common/autotest_common.sh@1142 -- # return 0 00:04:55.022 23:31:29 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:55.022 23:31:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.022 23:31:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.022 23:31:29 -- common/autotest_common.sh@10 -- # set +x 00:04:55.022 ************************************ 00:04:55.022 START TEST event 00:04:55.022 ************************************ 00:04:55.022 23:31:29 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:55.022 * Looking for test storage... 
00:04:55.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:55.022 23:31:29 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:55.022 23:31:29 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:55.022 23:31:29 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:55.022 23:31:29 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:55.022 23:31:29 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.022 23:31:29 event -- common/autotest_common.sh@10 -- # set +x 00:04:55.022 ************************************ 00:04:55.022 START TEST event_perf 00:04:55.022 ************************************ 00:04:55.022 23:31:30 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:55.022 Running I/O for 1 seconds...[2024-07-15 23:31:30.033716] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:04:55.022 [2024-07-15 23:31:30.033809] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3665195 ] 00:04:55.022 EAL: No free 2048 kB hugepages reported on node 1 00:04:55.022 [2024-07-15 23:31:30.094938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:55.280 [2024-07-15 23:31:30.201440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.280 [2024-07-15 23:31:30.201498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:55.280 [2024-07-15 23:31:30.201605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:55.280 [2024-07-15 23:31:30.201613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.213 Running I/O for 1 seconds... 00:04:56.213 lcore 0: 226487 00:04:56.213 lcore 1: 226486 00:04:56.213 lcore 2: 226485 00:04:56.213 lcore 3: 226486 00:04:56.213 done. 00:04:56.213 00:04:56.213 real 0m1.290s 00:04:56.213 user 0m4.205s 00:04:56.213 sys 0m0.080s 00:04:56.213 23:31:31 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.213 23:31:31 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:56.213 ************************************ 00:04:56.213 END TEST event_perf 00:04:56.213 ************************************ 00:04:56.213 23:31:31 event -- common/autotest_common.sh@1142 -- # return 0 00:04:56.213 23:31:31 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:56.213 23:31:31 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:56.213 23:31:31 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.213 23:31:31 event -- common/autotest_common.sh@10 -- # set +x 00:04:56.471 ************************************ 00:04:56.471 START TEST event_reactor 00:04:56.471 ************************************ 00:04:56.471 23:31:31 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:56.471 [2024-07-15 23:31:31.372889] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
00:04:56.471 [2024-07-15 23:31:31.372981] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3665374 ] 00:04:56.471 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.471 [2024-07-15 23:31:31.430308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.471 [2024-07-15 23:31:31.534800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.844 test_start 00:04:57.844 oneshot 00:04:57.844 tick 100 00:04:57.844 tick 100 00:04:57.844 tick 250 00:04:57.844 tick 100 00:04:57.844 tick 100 00:04:57.844 tick 100 00:04:57.844 tick 250 00:04:57.844 tick 500 00:04:57.844 tick 100 00:04:57.844 tick 100 00:04:57.844 tick 250 00:04:57.844 tick 100 00:04:57.844 tick 100 00:04:57.844 test_end 00:04:57.844 00:04:57.844 real 0m1.283s 00:04:57.844 user 0m1.198s 00:04:57.844 sys 0m0.080s 00:04:57.844 23:31:32 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.844 23:31:32 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:57.844 ************************************ 00:04:57.844 END TEST event_reactor 00:04:57.844 ************************************ 00:04:57.844 23:31:32 event -- common/autotest_common.sh@1142 -- # return 0 00:04:57.844 23:31:32 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:57.844 23:31:32 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:57.844 23:31:32 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.844 23:31:32 event -- common/autotest_common.sh@10 -- # set +x 00:04:57.844 ************************************ 00:04:57.844 START TEST event_reactor_perf 00:04:57.844 ************************************ 00:04:57.844 23:31:32 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:57.844 [2024-07-15 23:31:32.706401] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
00:04:57.844 [2024-07-15 23:31:32.706469] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3665536 ] 00:04:57.844 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.844 [2024-07-15 23:31:32.765088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.844 [2024-07-15 23:31:32.868067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.216 test_start 00:04:59.216 test_end 00:04:59.216 Performance: 448412 events per second 00:04:59.216 00:04:59.216 real 0m1.285s 00:04:59.216 user 0m1.212s 00:04:59.216 sys 0m0.069s 00:04:59.216 23:31:33 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.216 23:31:33 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:59.216 ************************************ 00:04:59.216 END TEST event_reactor_perf 00:04:59.216 ************************************ 00:04:59.216 23:31:33 event -- common/autotest_common.sh@1142 -- # return 0 00:04:59.216 23:31:34 event -- event/event.sh@49 -- # uname -s 00:04:59.216 23:31:34 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:59.216 23:31:34 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:59.216 23:31:34 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.216 23:31:34 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.216 23:31:34 event -- common/autotest_common.sh@10 -- # set +x 00:04:59.216 ************************************ 00:04:59.216 START TEST event_scheduler 00:04:59.216 ************************************ 00:04:59.216 23:31:34 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:59.216 * Looking for test storage... 00:04:59.216 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:59.216 23:31:34 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:59.216 23:31:34 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3665714 00:04:59.216 23:31:34 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:59.216 23:31:34 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:59.216 23:31:34 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3665714 00:04:59.216 23:31:34 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 3665714 ']' 00:04:59.216 23:31:34 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.216 23:31:34 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:59.216 23:31:34 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:59.216 23:31:34 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:59.216 23:31:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:59.216 [2024-07-15 23:31:34.128841] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:04:59.216 [2024-07-15 23:31:34.128926] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3665714 ] 00:04:59.216 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.216 [2024-07-15 23:31:34.185856] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:59.216 [2024-07-15 23:31:34.293907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.216 [2024-07-15 23:31:34.293989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.216 [2024-07-15 23:31:34.294037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:59.216 [2024-07-15 23:31:34.294040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:59.216 23:31:34 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:59.216 23:31:34 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:04:59.216 23:31:34 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:59.216 23:31:34 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.216 23:31:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:59.216 [2024-07-15 23:31:34.334749] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:59.216 [2024-07-15 23:31:34.334775] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:59.216 [2024-07-15 23:31:34.334800] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:59.216 [2024-07-15 23:31:34.334811] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:59.216 [2024-07-15 23:31:34.334821] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:59.216 23:31:34 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.216 23:31:34 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:59.216 23:31:34 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.216 23:31:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:59.475 [2024-07-15 23:31:34.431581] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:04:59.475 23:31:34 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.475 23:31:34 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:59.475 23:31:34 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.475 23:31:34 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.475 23:31:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:59.475 ************************************ 00:04:59.475 START TEST scheduler_create_thread 00:04:59.475 ************************************ 00:04:59.475 23:31:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:04:59.475 23:31:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:59.475 23:31:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.475 23:31:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.475 2 00:04:59.475 23:31:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.475 23:31:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:59.475 23:31:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.475 23:31:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.475 3 00:04:59.475 23:31:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.475 23:31:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:59.475 23:31:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.475 23:31:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.475 4 00:04:59.475 23:31:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.476 23:31:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:59.476 23:31:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.476 23:31:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.476 5 00:04:59.476 23:31:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.476 23:31:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:59.476 23:31:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.476 23:31:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.476 6 00:04:59.476 23:31:34 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.476 23:31:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:59.476 23:31:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.476 23:31:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.476 7 00:04:59.476 23:31:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.476 23:31:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:59.476 23:31:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.476 23:31:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.476 8 00:04:59.476 23:31:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.476 23:31:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:59.476 23:31:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.476 23:31:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.476 9 00:04:59.476 23:31:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.476 23:31:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:59.476 23:31:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.476 23:31:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.476 10 00:04:59.476 23:31:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.476 23:31:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:59.476 23:31:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.476 23:31:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.476 23:31:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.476 23:31:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:59.476 23:31:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:59.476 23:31:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.476 23:31:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.476 23:31:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.476 23:31:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:59.476 23:31:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.476 23:31:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.476 23:31:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.476 23:31:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:59.476 23:31:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:59.476 23:31:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.476 23:31:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.041 23:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:00.041 00:05:00.041 real 0m0.590s 00:05:00.041 user 0m0.011s 00:05:00.041 sys 0m0.002s 00:05:00.041 23:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:00.041 23:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.041 ************************************ 00:05:00.041 END TEST scheduler_create_thread 00:05:00.041 ************************************ 00:05:00.041 23:31:35 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:00.041 23:31:35 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:00.041 23:31:35 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3665714 00:05:00.041 23:31:35 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 3665714 ']' 00:05:00.041 23:31:35 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 3665714 00:05:00.041 23:31:35 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:00.041 23:31:35 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:00.041 23:31:35 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3665714 00:05:00.041 23:31:35 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:00.041 23:31:35 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:00.042 23:31:35 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3665714' 00:05:00.042 killing process with pid 3665714 00:05:00.042 23:31:35 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 3665714 00:05:00.042 23:31:35 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 3665714 00:05:00.608 [2024-07-15 23:31:35.527660] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:00.866 00:05:00.866 real 0m1.744s 00:05:00.866 user 0m2.159s 00:05:00.866 sys 0m0.328s 00:05:00.866 23:31:35 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:00.866 23:31:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:00.866 ************************************ 00:05:00.866 END TEST event_scheduler 00:05:00.866 ************************************ 00:05:00.866 23:31:35 event -- common/autotest_common.sh@1142 -- # return 0 00:05:00.866 23:31:35 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:00.866 23:31:35 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:00.866 23:31:35 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:00.866 23:31:35 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.866 23:31:35 event -- common/autotest_common.sh@10 -- # set +x 00:05:00.866 ************************************ 00:05:00.866 START TEST app_repeat 00:05:00.866 ************************************ 00:05:00.866 23:31:35 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:00.866 23:31:35 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.866 23:31:35 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.866 23:31:35 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:00.866 23:31:35 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.866 23:31:35 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:00.866 23:31:35 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:00.866 23:31:35 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:00.866 23:31:35 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3666025 00:05:00.866 23:31:35 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:00.866 23:31:35 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.866 23:31:35 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3666025' 00:05:00.866 Process app_repeat pid: 3666025 00:05:00.866 23:31:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:00.866 23:31:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:00.866 spdk_app_start Round 0 00:05:00.866 23:31:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3666025 /var/tmp/spdk-nbd.sock 00:05:00.867 23:31:35 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3666025 ']' 00:05:00.867 23:31:35 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:00.867 23:31:35 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:00.867 23:31:35 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:00.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:00.867 23:31:35 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:00.867 23:31:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:00.867 [2024-07-15 23:31:35.859574] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
00:05:00.867 [2024-07-15 23:31:35.859639] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3666025 ] 00:05:00.867 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.867 [2024-07-15 23:31:35.917401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:01.125 [2024-07-15 23:31:36.027524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.125 [2024-07-15 23:31:36.027528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.125 23:31:36 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:01.125 23:31:36 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:01.125 23:31:36 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:01.383 Malloc0 00:05:01.383 23:31:36 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:01.641 Malloc1 00:05:01.641 23:31:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:01.642 23:31:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.642 23:31:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:01.642 23:31:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:01.642 23:31:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.642 23:31:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:01.642 23:31:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:01.642 23:31:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.642 23:31:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:01.642 23:31:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:01.642 23:31:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.642 23:31:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:01.642 23:31:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:01.642 23:31:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:01.642 23:31:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.642 23:31:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:01.900 /dev/nbd0 00:05:01.900 23:31:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:01.900 23:31:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:01.900 23:31:36 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:01.900 23:31:36 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:01.900 23:31:36 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:01.900 23:31:36 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:01.900 23:31:36 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:01.900 23:31:36 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:01.900 23:31:36 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:01.900 23:31:36 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:01.900 23:31:36 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:01.900 1+0 records in 00:05:01.900 1+0 records out 00:05:01.900 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00015459 s, 26.5 MB/s 00:05:01.900 23:31:36 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:01.900 23:31:36 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:01.900 23:31:36 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:01.900 23:31:36 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:01.900 23:31:36 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:01.900 23:31:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:01.900 23:31:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.900 23:31:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:02.158 /dev/nbd1 00:05:02.158 23:31:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:02.158 23:31:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:02.158 23:31:37 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:02.158 23:31:37 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:02.158 23:31:37 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:02.158 23:31:37 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:02.158 23:31:37 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:02.158 23:31:37 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:02.158 23:31:37 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:02.158 23:31:37 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:02.158 23:31:37 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:02.158 1+0 records in 00:05:02.158 1+0 records out 00:05:02.158 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214702 s, 19.1 MB/s 00:05:02.158 23:31:37 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:02.158 23:31:37 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:02.158 23:31:37 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:02.158 23:31:37 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:02.158 23:31:37 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:02.158 23:31:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:02.158 23:31:37 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:02.158 23:31:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:02.158 23:31:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.158 23:31:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:02.416 23:31:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:02.416 { 00:05:02.416 "nbd_device": "/dev/nbd0", 00:05:02.416 "bdev_name": "Malloc0" 00:05:02.416 }, 00:05:02.416 { 00:05:02.416 "nbd_device": "/dev/nbd1", 00:05:02.416 "bdev_name": "Malloc1" 00:05:02.416 } 00:05:02.416 ]' 00:05:02.416 23:31:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:02.416 { 00:05:02.416 "nbd_device": "/dev/nbd0", 00:05:02.416 "bdev_name": "Malloc0" 00:05:02.416 }, 00:05:02.416 { 00:05:02.416 "nbd_device": "/dev/nbd1", 00:05:02.416 "bdev_name": "Malloc1" 00:05:02.416 } 00:05:02.416 ]' 00:05:02.416 23:31:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:02.416 23:31:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:02.416 /dev/nbd1' 00:05:02.416 23:31:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:02.416 /dev/nbd1' 00:05:02.416 23:31:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:02.416 23:31:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:02.416 23:31:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:02.416 23:31:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:02.416 23:31:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:02.416 23:31:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:02.416 23:31:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.416 23:31:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:02.416 23:31:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:02.416 23:31:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:02.416 23:31:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:02.416 23:31:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:02.416 256+0 records in 00:05:02.416 256+0 records out 00:05:02.416 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00504191 s, 208 MB/s 00:05:02.416 23:31:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:02.416 23:31:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:02.673 256+0 records in 00:05:02.673 256+0 records out 00:05:02.673 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0216262 s, 48.5 MB/s 00:05:02.673 23:31:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:02.673 23:31:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:02.673 256+0 records in 00:05:02.674 256+0 records out 00:05:02.674 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0217677 s, 48.2 MB/s 00:05:02.674 23:31:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:02.674 23:31:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.674 23:31:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:02.674 23:31:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:02.674 23:31:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:02.674 23:31:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:02.674 23:31:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:02.674 23:31:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:02.674 23:31:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:02.674 23:31:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:02.674 23:31:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:02.674 23:31:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:02.674 23:31:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:02.674 23:31:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.674 23:31:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.674 23:31:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:02.674 23:31:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:02.674 23:31:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:02.674 23:31:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:02.932 23:31:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:02.932 23:31:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:02.932 23:31:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:02.932 23:31:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:02.932 23:31:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:02.932 23:31:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:02.932 23:31:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:02.932 23:31:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:02.932 23:31:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:02.932 23:31:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:03.189 23:31:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:03.189 23:31:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:03.189 23:31:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:03.189 23:31:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:03.189 23:31:38 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:03.189 23:31:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:03.189 23:31:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:03.189 23:31:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:03.189 23:31:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:03.189 23:31:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.189 23:31:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:03.447 23:31:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:03.447 23:31:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:03.447 23:31:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:03.447 23:31:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:03.447 23:31:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:03.447 23:31:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:03.447 23:31:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:03.447 23:31:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:03.447 23:31:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:03.447 23:31:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:03.447 23:31:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:03.447 23:31:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:03.447 23:31:38 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:03.704 23:31:38 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:03.962 [2024-07-15 23:31:38.952320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:03.962 [2024-07-15 23:31:39.054851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.962 [2024-07-15 23:31:39.054852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.220 [2024-07-15 23:31:39.110036] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:04.220 [2024-07-15 23:31:39.110110] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:06.750 23:31:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:06.750 23:31:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:06.750 spdk_app_start Round 1 00:05:06.750 23:31:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3666025 /var/tmp/spdk-nbd.sock 00:05:06.750 23:31:41 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3666025 ']' 00:05:06.750 23:31:41 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:06.750 23:31:41 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:06.750 23:31:41 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:06.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
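The write/verify pass repeated inside each round reduces to plain dd and cmp against the exported nbd devices, roughly as follows (file location abbreviated, block counts as in the log):

tmp=test/event/nbdrandtest
dd if=/dev/urandom of=$tmp bs=4096 count=256          # 1 MiB of random data
for dev in /dev/nbd0 /dev/nbd1; do
    dd if=$tmp of=$dev bs=4096 count=256 oflag=direct # O_DIRECT, bypass the page cache
done
for dev in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M $tmp $dev                            # byte-compare the first 1 MiB
done
rm $tmp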
00:05:06.750 23:31:41 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:06.750 23:31:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:07.008 23:31:41 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:07.008 23:31:41 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:07.008 23:31:41 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:07.266 Malloc0 00:05:07.266 23:31:42 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:07.523 Malloc1 00:05:07.523 23:31:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:07.523 23:31:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.523 23:31:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:07.523 23:31:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:07.523 23:31:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.523 23:31:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:07.524 23:31:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:07.524 23:31:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.524 23:31:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:07.524 23:31:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:07.524 23:31:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.524 23:31:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:07.524 23:31:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:07.524 23:31:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:07.524 23:31:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:07.524 23:31:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:07.781 /dev/nbd0 00:05:07.781 23:31:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:07.781 23:31:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:07.781 23:31:42 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:07.781 23:31:42 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:07.781 23:31:42 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:07.781 23:31:42 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:07.781 23:31:42 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:07.781 23:31:42 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:07.781 23:31:42 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:07.781 23:31:42 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:07.781 23:31:42 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:07.781 1+0 records in 00:05:07.781 1+0 records out 00:05:07.781 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185153 s, 22.1 MB/s 00:05:07.781 23:31:42 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:07.781 23:31:42 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:07.781 23:31:42 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:07.781 23:31:42 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:07.781 23:31:42 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:07.781 23:31:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:07.781 23:31:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:07.781 23:31:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:08.039 /dev/nbd1 00:05:08.039 23:31:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:08.039 23:31:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:08.039 23:31:43 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:08.039 23:31:43 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:08.039 23:31:43 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:08.039 23:31:43 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:08.039 23:31:43 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:08.039 23:31:43 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:08.039 23:31:43 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:08.039 23:31:43 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:08.039 23:31:43 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:08.039 1+0 records in 00:05:08.039 1+0 records out 00:05:08.039 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000174195 s, 23.5 MB/s 00:05:08.039 23:31:43 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:08.039 23:31:43 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:08.039 23:31:43 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:08.039 23:31:43 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:08.039 23:31:43 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:08.039 23:31:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:08.039 23:31:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:08.039 23:31:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:08.039 23:31:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.039 23:31:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:08.300 23:31:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:08.300 { 00:05:08.300 "nbd_device": "/dev/nbd0", 00:05:08.300 "bdev_name": "Malloc0" 00:05:08.301 }, 00:05:08.301 { 00:05:08.301 "nbd_device": "/dev/nbd1", 00:05:08.301 "bdev_name": "Malloc1" 00:05:08.301 } 00:05:08.301 ]' 00:05:08.301 23:31:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:08.301 { 00:05:08.301 "nbd_device": "/dev/nbd0", 00:05:08.301 "bdev_name": "Malloc0" 00:05:08.301 }, 00:05:08.301 { 00:05:08.301 "nbd_device": "/dev/nbd1", 00:05:08.301 "bdev_name": "Malloc1" 00:05:08.301 } 00:05:08.301 ]' 00:05:08.301 23:31:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:08.301 23:31:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:08.301 /dev/nbd1' 00:05:08.301 23:31:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:08.301 /dev/nbd1' 00:05:08.301 23:31:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:08.301 23:31:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:08.301 23:31:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:08.301 23:31:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:08.301 23:31:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:08.301 23:31:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:08.301 23:31:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.301 23:31:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:08.301 23:31:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:08.301 23:31:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:08.301 23:31:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:08.301 23:31:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:08.301 256+0 records in 00:05:08.301 256+0 records out 00:05:08.301 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00501733 s, 209 MB/s 00:05:08.301 23:31:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:08.301 23:31:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:08.301 256+0 records in 00:05:08.301 256+0 records out 00:05:08.301 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.020492 s, 51.2 MB/s 00:05:08.301 23:31:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:08.301 23:31:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:08.301 256+0 records in 00:05:08.301 256+0 records out 00:05:08.301 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0218283 s, 48.0 MB/s 00:05:08.301 23:31:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:08.301 23:31:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.301 23:31:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:08.301 23:31:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:08.301 23:31:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:08.301 23:31:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:08.301 23:31:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:08.301 23:31:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:08.301 23:31:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:08.301 23:31:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:08.301 23:31:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:08.301 23:31:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:08.301 23:31:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:08.301 23:31:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.301 23:31:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.301 23:31:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:08.301 23:31:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:08.301 23:31:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:08.301 23:31:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:08.560 23:31:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:08.560 23:31:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:08.560 23:31:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:08.560 23:31:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:08.560 23:31:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:08.560 23:31:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:08.560 23:31:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:08.560 23:31:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:08.560 23:31:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:08.560 23:31:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:08.817 23:31:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:08.817 23:31:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:08.817 23:31:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:08.817 23:31:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:08.817 23:31:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:08.817 23:31:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:08.817 23:31:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:08.817 23:31:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:08.817 23:31:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:08.817 23:31:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.817 23:31:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:09.074 23:31:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:09.074 23:31:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:09.074 23:31:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:09.332 23:31:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:09.332 23:31:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:09.332 23:31:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:09.332 23:31:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:09.332 23:31:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:09.332 23:31:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:09.332 23:31:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:09.332 23:31:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:09.332 23:31:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:09.332 23:31:44 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:09.589 23:31:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:09.848 [2024-07-15 23:31:44.735303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:09.848 [2024-07-15 23:31:44.836617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.848 [2024-07-15 23:31:44.836621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.848 [2024-07-15 23:31:44.893879] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:09.848 [2024-07-15 23:31:44.893975] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:12.375 23:31:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:12.375 23:31:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:12.375 spdk_app_start Round 2 00:05:12.375 23:31:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3666025 /var/tmp/spdk-nbd.sock 00:05:12.375 23:31:47 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3666025 ']' 00:05:12.375 23:31:47 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:12.375 23:31:47 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:12.375 23:31:47 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:12.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
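waitfornbd, whose xtrace appears each time a disk is started above, polls /proc/partitions until the device shows up (at most 20 attempts) and then proves it serves I/O with a single O_DIRECT read of one 4 KiB block. A compact equivalent (the sleep interval and temp-file path are assumptions; the shipped helper may differ in detail):

waitfornbd() {
    local nbd_name=$1 i size
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    for ((i = 1; i <= 20; i++)); do
        if dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct; then
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0   # got a real 4096-byte block back
        fi
        sleep 0.1
    done
    return 1
}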
00:05:12.375 23:31:47 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:12.375 23:31:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:12.632 23:31:47 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.632 23:31:47 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:12.632 23:31:47 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:12.908 Malloc0 00:05:12.908 23:31:47 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:13.173 Malloc1 00:05:13.173 23:31:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:13.173 23:31:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.173 23:31:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:13.173 23:31:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:13.173 23:31:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.173 23:31:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:13.173 23:31:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:13.173 23:31:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.173 23:31:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:13.173 23:31:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:13.173 23:31:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.173 23:31:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:13.173 23:31:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:13.173 23:31:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:13.173 23:31:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.173 23:31:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:13.431 /dev/nbd0 00:05:13.431 23:31:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:13.431 23:31:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:13.431 23:31:48 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:13.431 23:31:48 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:13.431 23:31:48 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:13.431 23:31:48 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:13.431 23:31:48 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:13.431 23:31:48 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:13.431 23:31:48 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:13.431 23:31:48 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:13.431 23:31:48 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:13.431 1+0 records in 00:05:13.431 1+0 records out 00:05:13.431 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021386 s, 19.2 MB/s 00:05:13.431 23:31:48 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:13.431 23:31:48 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:13.431 23:31:48 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:13.431 23:31:48 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:13.431 23:31:48 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:13.431 23:31:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:13.431 23:31:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.431 23:31:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:13.689 /dev/nbd1 00:05:13.689 23:31:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:13.689 23:31:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:13.689 23:31:48 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:13.689 23:31:48 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:13.689 23:31:48 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:13.689 23:31:48 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:13.689 23:31:48 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:13.689 23:31:48 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:13.689 23:31:48 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:13.689 23:31:48 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:13.689 23:31:48 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:13.689 1+0 records in 00:05:13.689 1+0 records out 00:05:13.689 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000168417 s, 24.3 MB/s 00:05:13.689 23:31:48 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:13.689 23:31:48 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:13.689 23:31:48 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:13.689 23:31:48 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:13.689 23:31:48 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:13.689 23:31:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:13.689 23:31:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.689 23:31:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:13.689 23:31:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.689 23:31:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.947 23:31:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:13.947 { 00:05:13.947 "nbd_device": "/dev/nbd0", 00:05:13.947 "bdev_name": "Malloc0" 00:05:13.947 }, 00:05:13.947 { 00:05:13.947 "nbd_device": "/dev/nbd1", 00:05:13.947 "bdev_name": "Malloc1" 00:05:13.947 } 00:05:13.947 ]' 00:05:13.947 23:31:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:13.947 { 00:05:13.947 "nbd_device": "/dev/nbd0", 00:05:13.947 "bdev_name": "Malloc0" 00:05:13.947 }, 00:05:13.947 { 00:05:13.947 "nbd_device": "/dev/nbd1", 00:05:13.947 "bdev_name": "Malloc1" 00:05:13.947 } 00:05:13.947 ]' 00:05:13.947 23:31:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:14.205 23:31:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:14.205 /dev/nbd1' 00:05:14.205 23:31:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:14.205 /dev/nbd1' 00:05:14.205 23:31:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:14.205 23:31:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:14.205 23:31:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:14.205 23:31:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:14.205 23:31:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:14.205 23:31:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:14.206 23:31:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.206 23:31:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:14.206 23:31:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:14.206 23:31:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:14.206 23:31:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:14.206 23:31:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:14.206 256+0 records in 00:05:14.206 256+0 records out 00:05:14.206 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00408743 s, 257 MB/s 00:05:14.206 23:31:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:14.206 23:31:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:14.206 256+0 records in 00:05:14.206 256+0 records out 00:05:14.206 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0221061 s, 47.4 MB/s 00:05:14.206 23:31:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:14.206 23:31:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:14.206 256+0 records in 00:05:14.206 256+0 records out 00:05:14.206 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0234597 s, 44.7 MB/s 00:05:14.206 23:31:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:14.206 23:31:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.206 23:31:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:14.206 23:31:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:14.206 23:31:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:14.206 23:31:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:14.206 23:31:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:14.206 23:31:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:14.206 23:31:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:14.206 23:31:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:14.206 23:31:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:14.206 23:31:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:14.206 23:31:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:14.206 23:31:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.206 23:31:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.206 23:31:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:14.206 23:31:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:14.206 23:31:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:14.206 23:31:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:14.464 23:31:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:14.464 23:31:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:14.464 23:31:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:14.464 23:31:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:14.464 23:31:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:14.464 23:31:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:14.464 23:31:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:14.464 23:31:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:14.464 23:31:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:14.464 23:31:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:14.721 23:31:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:14.721 23:31:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:14.721 23:31:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:14.721 23:31:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:14.721 23:31:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:14.721 23:31:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:14.721 23:31:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:14.721 23:31:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:14.722 23:31:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:14.722 23:31:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.722 23:31:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:14.980 23:31:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:14.980 23:31:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:14.980 23:31:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:14.980 23:31:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:14.980 23:31:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:14.980 23:31:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:14.980 23:31:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:14.980 23:31:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:14.980 23:31:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:14.980 23:31:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:14.980 23:31:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:14.980 23:31:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:14.980 23:31:49 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:15.238 23:31:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:15.496 [2024-07-15 23:31:50.485232] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:15.496 [2024-07-15 23:31:50.589155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.496 [2024-07-15 23:31:50.589155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.754 [2024-07-15 23:31:50.647635] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:15.754 [2024-07-15 23:31:50.647711] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:18.278 23:31:53 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3666025 /var/tmp/spdk-nbd.sock 00:05:18.278 23:31:53 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3666025 ']' 00:05:18.278 23:31:53 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:18.278 23:31:53 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:18.279 23:31:53 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:18.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
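The teardown check above (nbd_get_count) asks the RPC server which nbd devices are still exported and counts them: nbd_get_disks returns a JSON array, jq extracts the .nbd_device fields, grep -c counts the matches, and the round only proceeds once the count is 0. Condensed (socket path as in this run):

RPC="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
count=$($RPC nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
[ "$count" -eq 0 ] || exit 1   # every nbd disk must be stopped before the next round

The '|| true' mirrors the trace: grep -c exits non-zero whenever the count it prints is 0.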
00:05:18.279 23:31:53 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:18.279 23:31:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:18.536 23:31:53 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.536 23:31:53 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:18.536 23:31:53 event.app_repeat -- event/event.sh@39 -- # killprocess 3666025 00:05:18.536 23:31:53 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 3666025 ']' 00:05:18.536 23:31:53 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 3666025 00:05:18.536 23:31:53 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:18.536 23:31:53 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:18.536 23:31:53 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3666025 00:05:18.536 23:31:53 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:18.536 23:31:53 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:18.536 23:31:53 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3666025' 00:05:18.536 killing process with pid 3666025 00:05:18.536 23:31:53 event.app_repeat -- common/autotest_common.sh@967 -- # kill 3666025 00:05:18.536 23:31:53 event.app_repeat -- common/autotest_common.sh@972 -- # wait 3666025 00:05:18.793 spdk_app_start is called in Round 0. 00:05:18.793 Shutdown signal received, stop current app iteration 00:05:18.793 Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 reinitialization... 00:05:18.793 spdk_app_start is called in Round 1. 00:05:18.793 Shutdown signal received, stop current app iteration 00:05:18.793 Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 reinitialization... 00:05:18.793 spdk_app_start is called in Round 2. 00:05:18.793 Shutdown signal received, stop current app iteration 00:05:18.793 Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 reinitialization... 00:05:18.793 spdk_app_start is called in Round 3. 
00:05:18.793 Shutdown signal received, stop current app iteration 00:05:18.793 23:31:53 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:18.793 23:31:53 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:18.793 00:05:18.793 real 0m17.909s 00:05:18.793 user 0m38.860s 00:05:18.793 sys 0m3.209s 00:05:18.793 23:31:53 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.793 23:31:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:18.793 ************************************ 00:05:18.793 END TEST app_repeat 00:05:18.793 ************************************ 00:05:18.793 23:31:53 event -- common/autotest_common.sh@1142 -- # return 0 00:05:18.793 23:31:53 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:18.793 23:31:53 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:18.793 23:31:53 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.793 23:31:53 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.793 23:31:53 event -- common/autotest_common.sh@10 -- # set +x 00:05:18.793 ************************************ 00:05:18.793 START TEST cpu_locks 00:05:18.794 ************************************ 00:05:18.794 23:31:53 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:18.794 * Looking for test storage... 00:05:18.794 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:18.794 23:31:53 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:18.794 23:31:53 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:18.794 23:31:53 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:18.794 23:31:53 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:18.794 23:31:53 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.794 23:31:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.794 23:31:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.794 ************************************ 00:05:18.794 START TEST default_locks 00:05:18.794 ************************************ 00:05:18.794 23:31:53 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:18.794 23:31:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3668377 00:05:18.794 23:31:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:18.794 23:31:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3668377 00:05:18.794 23:31:53 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3668377 ']' 00:05:18.794 23:31:53 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.794 23:31:53 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:18.794 23:31:53 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
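waitforlisten, invoked for the spdk_tgt pid above with max_retries=100, retries until something is accepting connections on the requested RPC socket (/var/tmp/spdk.sock here). A rough equivalent, using rpc.py itself as the probe (the probe method is an assumption; the real helper may test the socket differently):

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died while we waited
        if [ -S "$rpc_addr" ] &&
           scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods >/dev/null 2>&1; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}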
00:05:18.794 23:31:53 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:18.794 23:31:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:19.052 [2024-07-15 23:31:53.932296] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization...
00:05:19.052 [2024-07-15 23:31:53.932371] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3668377 ]
00:05:19.052 EAL: No free 2048 kB hugepages reported on node 1
00:05:19.052 [2024-07-15 23:31:53.989521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:19.052 [2024-07-15 23:31:54.096423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:19.310 23:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:19.310 23:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0
00:05:19.310 23:31:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3668377
00:05:19.310 23:31:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3668377
00:05:19.310 23:31:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:19.568 lslocks: write error
00:05:19.568 23:31:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3668377
00:05:19.568 23:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 3668377 ']'
00:05:19.568 23:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 3668377
00:05:19.568 23:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname
00:05:19.568 23:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:19.568 23:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3668377
00:05:19.568 23:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:19.568 23:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:19.568 23:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3668377'
killing process with pid 3668377
23:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 3668377
23:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 3668377
00:05:20.135 23:31:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3668377
00:05:20.135 23:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0
00:05:20.135 23:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3668377
00:05:20.135 23:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:05:20.135 23:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:20.135 23:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:05:20.135 23:31:54 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:20.135 23:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 3668377
00:05:20.135 23:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3668377 ']'
00:05:20.135 23:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:20.135 23:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:20.135 23:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:20.135 23:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:20.135 23:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:20.135 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3668377) - No such process
00:05:20.135 ERROR: process (pid: 3668377) is no longer running
00:05:20.135 23:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:20.135 23:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1
00:05:20.135 23:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1
00:05:20.135 23:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:05:20.135 23:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:05:20.135 23:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:05:20.135 23:31:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:05:20.135 23:31:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:20.135 23:31:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:05:20.135 23:31:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:20.135
00:05:20.135 real 0m1.127s
00:05:20.135 user 0m1.079s
00:05:20.135 sys 0m0.490s
00:05:20.135 23:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:20.135 23:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:20.135 ************************************
00:05:20.135 END TEST default_locks
00:05:20.135 ************************************
00:05:20.135 23:31:55 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:05:20.135 23:31:55 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:05:20.135 23:31:55 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:20.135 23:31:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:20.135 23:31:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:20.135 ************************************
00:05:20.135 START TEST default_locks_via_rpc
************************************
00:05:20.135 23:31:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc
00:05:20.135 23:31:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3668539
00:05:20.135 23:31:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:20.135 23:31:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3668539
00:05:20.135 23:31:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3668539 ']'
00:05:20.135 23:31:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:20.135 23:31:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:20.135 23:31:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:20.135 23:31:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:20.135 23:31:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:20.135 [2024-07-15 23:31:55.111316] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization...
00:05:20.135 [2024-07-15 23:31:55.111405] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3668539 ]
00:05:20.135 EAL: No free 2048 kB hugepages reported on node 1
00:05:20.135 [2024-07-15 23:31:55.168598] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:20.394 [2024-07-15 23:31:55.279767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:20.394 23:31:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:20.394 23:31:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0
00:05:20.394 23:31:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:05:20.394 23:31:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:20.394 23:31:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:20.652 23:31:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:20.652 23:31:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:05:20.652 23:31:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:20.652 23:31:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:05:20.652 23:31:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:20.652 23:31:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:05:20.652 23:31:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:20.652 23:31:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:20.652 23:31:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:20.652 23:31:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3668539
00:05:20.652 23:31:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3668539
00:05:20.652 23:31:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:20.652 23:31:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3668539
00:05:20.652 23:31:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 3668539 ']'
00:05:20.652 23:31:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 3668539
00:05:20.652 23:31:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname
00:05:20.652 23:31:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:20.652 23:31:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3668539
00:05:20.910 23:31:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:20.910 23:31:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:20.910 23:31:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3668539'
killing process with pid 3668539
23:31:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 3668539
23:31:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 3668539
00:05:21.168
00:05:21.168 real 0m1.143s
00:05:21.168 user 0m1.115s
00:05:21.168 sys 0m0.473s
00:05:21.168 23:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:21.168 23:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:21.168 ************************************
00:05:21.168 END TEST default_locks_via_rpc
00:05:21.168 ************************************
00:05:21.168 23:31:56 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:05:21.168 23:31:56 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:05:21.168 23:31:56 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:21.168 23:31:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:21.168 23:31:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:21.168 ************************************
00:05:21.168 START TEST non_locking_app_on_locked_coremask
************************************
00:05:21.168 23:31:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask
00:05:21.168 23:31:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3668701
00:05:21.168 23:31:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:21.168 23:31:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3668701 /var/tmp/spdk.sock
00:05:21.168 23:31:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3668701 ']'
00:05:21.168 23:31:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:21.168 23:31:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:21.168 23:31:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:21.168 23:31:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:21.168 23:31:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:21.427 [2024-07-15 23:31:56.309421] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization...
00:05:21.427 [2024-07-15 23:31:56.309503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3668701 ]
00:05:21.427 EAL: No free 2048 kB hugepages reported on node 1
00:05:21.427 [2024-07-15 23:31:56.366399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:21.427 [2024-07-15 23:31:56.476200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:21.686 23:31:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:21.686 23:31:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0
00:05:21.686 23:31:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3668714
00:05:21.686 23:31:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:05:21.686 23:31:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3668714 /var/tmp/spdk2.sock
00:05:21.686 23:31:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3668714 ']'
00:05:21.686 23:31:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:21.686 23:31:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:21.686 23:31:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:21.686 23:31:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:21.686 23:31:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:21.945 [2024-07-15 23:31:56.761276] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization...
00:05:21.945 [2024-07-15 23:31:56.761354] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3668714 ]
00:05:21.945 EAL: No free 2048 kB hugepages reported on node 1
00:05:21.945 [2024-07-15 23:31:56.843987] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
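Both targets here run with cpumask 0x1, yet they can coexist because the second was started with --disable-cpumask-locks and prints "CPU core locks deactivated." instead of claiming core 0. A locked target holds one lock file per core, /var/tmp/spdk_cpu_lock_NNN, which is what locks_exist greps for in the lslocks output. SPDK takes these locks in C (the app.c claim_cpu_cores frames in the later errors); the bash flock illustration below is only an analogy, not SPDK's code:

    # Analogy only: what claiming a core lock amounts to. SPDK does this in C
    # inside app.c; the lock file naming matches the log.
    claim_core() {
        local core=$1 fd lockfile
        lockfile=$(printf '/var/tmp/spdk_cpu_lock_%03d' "$core")
        exec {fd}> "$lockfile"               # open (and create) the lock file
        if ! flock -n "$fd"; then            # non-blocking exclusive lock
            echo "Cannot create lock on core $core, probably another process has claimed it." >&2
            return 1
        fi
        # keep $fd open for the process lifetime to hold the lock
    }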
00:05:21.945 [2024-07-15 23:31:56.844014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:21.945 [2024-07-15 23:31:57.059469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:22.879 23:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:22.879 23:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0
00:05:22.879 23:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3668701
00:05:22.879 23:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3668701
00:05:22.879 23:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:23.137 lslocks: write error
00:05:23.137 23:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3668701
00:05:23.137 23:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3668701 ']'
00:05:23.137 23:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3668701
00:05:23.137 23:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname
00:05:23.137 23:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:23.137 23:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3668701
00:05:23.137 23:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:23.137 23:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:23.137 23:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3668701'
killing process with pid 3668701
23:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3668701
23:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3668701
00:05:24.071 23:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3668714
00:05:24.071 23:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3668714 ']'
00:05:24.071 23:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3668714
00:05:24.071 23:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname
00:05:24.071 23:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:24.071 23:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3668714
00:05:24.071 23:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:24.071 23:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:24.071 23:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3668714'
killing process with pid 3668714
23:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3668714
23:31:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3668714
00:05:24.331
00:05:24.331 real 0m3.125s
00:05:24.331 user 0m3.294s
00:05:24.331 sys 0m1.002s
00:05:24.331 23:31:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:24.331 23:31:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:24.331 ************************************
00:05:24.331 END TEST non_locking_app_on_locked_coremask
************************************
00:05:24.331 23:31:59 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:05:24.331 23:31:59 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:05:24.331 23:31:59 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:24.331 23:31:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:24.331 23:31:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:24.331 ************************************
00:05:24.331 START TEST locking_app_on_unlocked_coremask
************************************
00:05:24.331 23:31:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask
00:05:24.331 23:31:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3669135
00:05:24.331 23:31:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:05:24.331 23:31:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3669135 /var/tmp/spdk.sock
00:05:24.331 23:31:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3669135 ']'
00:05:24.331 23:31:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:24.331 23:31:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:24.331 23:31:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
23:31:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
23:31:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:24.590 [2024-07-15 23:31:59.482792] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization...
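The repeated "lslocks: write error" lines around locks_exist in the runs above are harmless: lslocks is complaining about a broken pipe, because grep -q exits the moment it has its answer. The helper itself is the one-liner the @22 xtrace lines show:

    # locks_exist from event/cpu_locks.sh@22: a pid holds a core lock iff
    # lslocks reports an spdk_cpu_lock entry for it. grep -q closing the
    # pipe early is what triggers the "lslocks: write error" log lines.
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }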
00:05:24.590 [2024-07-15 23:31:59.482889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3669135 ]
00:05:24.590 EAL: No free 2048 kB hugepages reported on node 1
00:05:24.590 [2024-07-15 23:31:59.539370] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:24.590 [2024-07-15 23:31:59.539409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:24.590 [2024-07-15 23:31:59.637138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:24.863 23:31:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:24.863 23:31:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0
00:05:24.863 23:31:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3669144
00:05:24.863 23:31:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:24.863 23:31:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3669144 /var/tmp/spdk2.sock
00:05:24.863 23:31:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3669144 ']'
00:05:24.863 23:31:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:24.863 23:31:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:24.863 23:31:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:24.863 23:31:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:24.863 23:31:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:24.863 [2024-07-15 23:31:59.927816] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization...
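locking_app_on_unlocked_coremask inverts the previous test: the first target gives up its locks (--disable-cpumask-locks) and the second, started without that flag on the same mask, is the one expected to hold /var/tmp/spdk_cpu_lock_000. The pattern for running two targets side by side is simply separate RPC sockets; a condensed sketch using the paths from the log (backgrounding and pid capture as cpu_locks.sh presumably does):

    # Two spdk_tgt instances on one core mask, distinguished by RPC socket.
    # Only tgt2 (no --disable-cpumask-locks) claims the core 0 lock file.
    SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

    "$SPDK_TGT" -m 0x1 --disable-cpumask-locks &    # serves /var/tmp/spdk.sock
    tgt1=$!
    "$SPDK_TGT" -m 0x1 -r /var/tmp/spdk2.sock &     # claims spdk_cpu_lock_000
    tgt2=$!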
00:05:24.863 [2024-07-15 23:31:59.927912] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3669144 ]
00:05:25.121 EAL: No free 2048 kB hugepages reported on node 1
00:05:25.121 [2024-07-15 23:32:00.017132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:25.121 [2024-07-15 23:32:00.230659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:26.048 23:32:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:26.048 23:32:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0
00:05:26.048 23:32:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3669144
00:05:26.048 23:32:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3669144
00:05:26.048 23:32:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:26.304 lslocks: write error
00:05:26.304 23:32:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3669135
00:05:26.304 23:32:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3669135 ']'
00:05:26.304 23:32:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3669135
00:05:26.304 23:32:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname
00:05:26.304 23:32:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:26.304 23:32:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3669135
00:05:26.560 23:32:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:26.560 23:32:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:26.560 23:32:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3669135'
killing process with pid 3669135
23:32:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 3669135
23:32:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3669135
00:05:27.501 23:32:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3669144
00:05:27.501 23:32:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3669144 ']'
00:05:27.501 23:32:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3669144
00:05:27.501 23:32:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname
00:05:27.501 23:32:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:27.501 23:32:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3669144
00:05:27.501 23:32:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:27.501 23:32:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:27.501 23:32:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3669144'
killing process with pid 3669144
23:32:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 3669144
23:32:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3669144
00:05:27.757
00:05:27.757 real 0m3.267s
00:05:27.757 user 0m3.470s
00:05:27.757 sys 0m1.026s
00:05:27.757 23:32:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:27.757 23:32:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:27.757 ************************************
00:05:27.757 END TEST locking_app_on_unlocked_coremask
************************************
00:05:27.757 23:32:02 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:05:27.757 23:32:02 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:05:27.757 23:32:02 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:27.757 23:32:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:27.757 23:32:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:27.757 ************************************
00:05:27.757 START TEST locking_app_on_locked_coremask
************************************
00:05:27.757 23:32:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask
00:05:27.757 23:32:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3669575
00:05:27.757 23:32:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:27.757 23:32:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3669575 /var/tmp/spdk.sock
00:05:27.757 23:32:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3669575 ']'
00:05:27.757 23:32:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:27.757 23:32:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:27.757 23:32:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
23:32:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
23:32:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:28.013 [2024-07-15 23:32:02.807690] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization...
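locking_app_on_locked_coremask, starting here, leans on the NOT helper visible at autotest_common.sh@648-675: run a command, capture its exit status, and succeed only when the command failed. That is how the test asserts that a second locked target cannot come up on an already-claimed core. A sketch reconstructed from the xtrace (the handling of statuses above 128 is abridged and partly assumed):

    # NOT: invert a command's status; success means "it failed as expected".
    NOT() {
        local es=0
        "$@" || es=$?
        # statuses above 128 mean death by signal; the real helper treats
        # those specially (detail assumed/abridged here)
        (( es > 128 )) && es=1
        (( !es == 0 ))   # arithmetic truth: returns 0 iff es was nonzero
    }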
00:05:27.757 [2024-07-15 23:32:02.807779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3669575 ]
00:05:27.757 EAL: No free 2048 kB hugepages reported on node 1
00:05:27.757 [2024-07-15 23:32:02.864523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:28.013 [2024-07-15 23:32:02.974986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:28.271 23:32:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:28.271 23:32:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0
00:05:28.271 23:32:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3669578
00:05:28.271 23:32:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:28.271 23:32:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3669578 /var/tmp/spdk2.sock
00:05:28.271 23:32:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0
00:05:28.271 23:32:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3669578 /var/tmp/spdk2.sock
00:05:28.271 23:32:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:05:28.271 23:32:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:28.271 23:32:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:05:28.271 23:32:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:28.271 23:32:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3669578 /var/tmp/spdk2.sock
00:05:28.271 23:32:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3669578 ']'
00:05:28.271 23:32:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:28.271 23:32:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:28.271 23:32:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:28.271 23:32:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:28.271 23:32:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:28.271 [2024-07-15 23:32:03.262025] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization...
00:05:28.271 [2024-07-15 23:32:03.262103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3669578 ]
00:05:28.271 EAL: No free 2048 kB hugepages reported on node 1
00:05:28.271 [2024-07-15 23:32:03.341280] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3669575 has claimed it.
00:05:28.271 [2024-07-15 23:32:03.341329] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:28.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3669578) - No such process
00:05:28.835 ERROR: process (pid: 3669578) is no longer running
00:05:28.835 23:32:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:28.835 23:32:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1
00:05:28.835 23:32:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1
00:05:28.835 23:32:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:05:28.835 23:32:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:05:28.835 23:32:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:05:28.835 23:32:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3669575
00:05:28.835 23:32:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3669575
00:05:28.835 23:32:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:29.399 lslocks: write error
00:05:29.399 23:32:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3669575
00:05:29.399 23:32:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3669575 ']'
00:05:29.399 23:32:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3669575
00:05:29.399 23:32:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname
00:05:29.399 23:32:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:29.399 23:32:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3669575
00:05:29.399 23:32:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:29.399 23:32:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:29.399 23:32:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3669575'
killing process with pid 3669575
23:32:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3669575
23:32:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3669575
00:05:29.965
00:05:29.965 real 0m2.083s
00:05:29.965 user 0m2.257s
00:05:29.965 sys 0m0.651s
00:05:29.965 23:32:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:29.966 23:32:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:29.966 ************************************
00:05:29.966 END TEST locking_app_on_locked_coremask
00:05:29.966 ************************************
00:05:29.966 23:32:04 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:05:29.966 23:32:04 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:05:29.966 23:32:04 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:29.966 23:32:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:29.966 23:32:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:29.966 ************************************
00:05:29.966 START TEST locking_overlapped_coremask
************************************
00:05:29.966 23:32:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask
00:05:29.966 23:32:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3669872
00:05:29.966 23:32:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:05:29.966 23:32:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3669872 /var/tmp/spdk.sock
00:05:29.966 23:32:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3669872 ']'
00:05:29.966 23:32:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:29.966 23:32:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:29.966 23:32:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
23:32:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
23:32:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:29.966 [2024-07-15 23:32:04.936334] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization...
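locking_overlapped_coremask depends on the two cpumasks colliding on exactly one core: the first target's -m 0x7 is binary 111 (cores 0 through 2), while the second will use -m 0x1c, binary 11100 (cores 2 through 4). Their intersection is bit 2 only, which is why the failure below names core 2 and nothing else:

    # 0x7 & 0x1c == 0x4: only bit 2 is common, so the second target's
    # claim fails on core 2 and on no other core.
    printf 'overlap mask: %#x\n' $(( 0x7 & 0x1c ))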
00:05:29.966 [2024-07-15 23:32:04.936425] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3669872 ]
00:05:29.966 EAL: No free 2048 kB hugepages reported on node 1
00:05:29.966 [2024-07-15 23:32:04.993903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:30.224 [2024-07-15 23:32:05.106386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:30.224 [2024-07-15 23:32:05.106453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:05:30.224 [2024-07-15 23:32:05.106456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:30.481 23:32:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:30.481 23:32:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0
00:05:30.482 23:32:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3669878
00:05:30.482 23:32:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3669878 /var/tmp/spdk2.sock
00:05:30.482 23:32:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0
00:05:30.482 23:32:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3669878 /var/tmp/spdk2.sock
00:05:30.482 23:32:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:05:30.482 23:32:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:05:30.482 23:32:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:30.482 23:32:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:05:30.482 23:32:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:30.482 23:32:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3669878 /var/tmp/spdk2.sock
00:05:30.482 23:32:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3669878 ']'
00:05:30.482 23:32:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:30.482 23:32:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:30.482 23:32:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:30.482 23:32:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:30.482 23:32:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:30.482 [2024-07-15 23:32:05.410206] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization...
00:05:30.482 [2024-07-15 23:32:05.410295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3669878 ]
00:05:30.482 EAL: No free 2048 kB hugepages reported on node 1
00:05:30.482 [2024-07-15 23:32:05.497279] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3669872 has claimed it.
00:05:30.482 [2024-07-15 23:32:05.497329] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:31.087 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3669878) - No such process
00:05:31.087 ERROR: process (pid: 3669878) is no longer running
00:05:31.087 23:32:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:31.087 23:32:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1
00:05:31.087 23:32:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1
00:05:31.087 23:32:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:05:31.087 23:32:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:05:31.087 23:32:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:05:31.087 23:32:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:05:31.087 23:32:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:05:31.087 23:32:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:05:31.087 23:32:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:05:31.087 23:32:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3669872
00:05:31.087 23:32:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 3669872 ']'
00:05:31.087 23:32:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 3669872
00:05:31.087 23:32:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname
00:05:31.087 23:32:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:31.087 23:32:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3669872
00:05:31.087 23:32:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:31.087 23:32:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:31.087 23:32:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3669872'
killing process with pid 3669872
23:32:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 3669872
23:32:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 3669872
00:05:31.652
00:05:31.652 real 0m1.680s
00:05:31.652 user 0m4.466s
00:05:31.652 sys 0m0.436s
00:05:31.652 23:32:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:31.652 23:32:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:31.652 ************************************
00:05:31.652 END TEST locking_overlapped_coremask
************************************
00:05:31.653 23:32:06 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:05:31.653 23:32:06 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:05:31.653 23:32:06 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:31.653 23:32:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:31.653 23:32:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:31.653 ************************************
00:05:31.653 START TEST locking_overlapped_coremask_via_rpc
************************************
00:05:31.653 23:32:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc
00:05:31.653 23:32:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3670042
00:05:31.653 23:32:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:05:31.653 23:32:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3670042 /var/tmp/spdk.sock
00:05:31.653 23:32:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3670042 ']'
00:05:31.653 23:32:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:31.653 23:32:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:31.653 23:32:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:31.653 23:32:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:31.653 23:32:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:31.653 [2024-07-15 23:32:06.668123] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization...
00:05:31.653 [2024-07-15 23:32:06.668209] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3670042 ]
00:05:31.653 EAL: No free 2048 kB hugepages reported on node 1
00:05:31.653 [2024-07-15 23:32:06.723377] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
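After the overlap failure, check_remaining_locks (event/cpu_locks.sh@36-38 in the trace above) verified that the surviving 0x7 target still holds exactly the lock files for cores 000 through 002. As the xtrace shows, it is a plain glob-against-brace-expansion comparison (quoting of the right-hand side added here to force a literal match):

    # check_remaining_locks as seen at event/cpu_locks.sh@36-38: the lock
    # files actually present must equal the expected set for cores 000..002.
    check_remaining_locks() {
        locks=(/var/tmp/spdk_cpu_lock_*)
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
        [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }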
00:05:31.653 [2024-07-15 23:32:06.723415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:31.911 [2024-07-15 23:32:06.825836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:31.911 [2024-07-15 23:32:06.825945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:05:31.911 [2024-07-15 23:32:06.825948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:32.169 23:32:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:32.169 23:32:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0
00:05:32.169 23:32:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3670102
00:05:32.169 23:32:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:05:32.170 23:32:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3670102 /var/tmp/spdk2.sock
00:05:32.170 23:32:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3670102 ']'
00:05:32.170 23:32:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:32.170 23:32:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:32.170 23:32:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
23:32:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
23:32:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:32.170 [2024-07-15 23:32:07.125481] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization...
00:05:32.170 [2024-07-15 23:32:07.125597] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3670102 ]
00:05:32.170 EAL: No free 2048 kB hugepages reported on node 1
00:05:32.170 [2024-07-15 23:32:07.214023] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:32.170 [2024-07-15 23:32:07.214065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:32.428 [2024-07-15 23:32:07.441079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:05:32.428 [2024-07-15 23:32:07.445010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:05:32.428 [2024-07-15 23:32:07.445013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:05:32.994 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:32.994 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0
00:05:32.994 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:05:32.994 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:32.994 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:32.994 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:32.994 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:05:32.994 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0
00:05:32.994 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:05:32.994 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:05:32.994 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:32.994 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:05:32.994 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:32.994 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:05:32.994 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:32.994 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:32.994 [2024-07-15 23:32:08.085060] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3670042 has claimed it.
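In the via_rpc variant both targets boot unlocked, and the first then claims cores 0 through 2 at runtime through the framework_enable_cpumask_locks RPC; the same call against the second target's socket must fail, since core 2 is already taken. rpc_cmd appears to be a thin wrapper around scripts/rpc.py (an assumption; the -s socket flag matches rpc.py usage), so the equivalent direct calls would be:

    # Direct rpc.py equivalents of the rpc_cmd calls above (wrapper assumed):
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # succeeds
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # fails: core 2 claimed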
00:05:32.994 request: 00:05:32.994 { 00:05:32.994 "method": "framework_enable_cpumask_locks", 00:05:32.994 "req_id": 1 00:05:32.994 } 00:05:32.994 Got JSON-RPC error response 00:05:32.994 response: 00:05:32.994 { 00:05:32.994 "code": -32603, 00:05:32.994 "message": "Failed to claim CPU core: 2" 00:05:32.994 } 00:05:32.994 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:32.994 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:32.994 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:32.994 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:32.994 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:32.994 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3670042 /var/tmp/spdk.sock 00:05:32.994 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3670042 ']' 00:05:32.994 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.994 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:32.994 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.994 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:32.994 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.253 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:33.253 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:33.253 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3670102 /var/tmp/spdk2.sock 00:05:33.253 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3670102 ']' 00:05:33.253 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:33.253 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:33.253 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:33.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
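
With the overlapping claim rejected, lock files should exist only for the first target's cores. The check_remaining_locks step in the trace below globs the lock files and compares them against the expected set; by hand that is roughly:

    # only cores 0-2 (the first target's reactors) should hold lock files
    ls /var/tmp/spdk_cpu_lock_*    # expected: ..._000 ..._001 ..._002
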
00:05:33.253 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:33.253 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.511 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:33.511 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:33.511 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:33.511 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:33.511 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:33.511 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:33.511 00:05:33.511 real 0m1.995s 00:05:33.511 user 0m1.024s 00:05:33.511 sys 0m0.194s 00:05:33.511 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.511 23:32:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.511 ************************************ 00:05:33.511 END TEST locking_overlapped_coremask_via_rpc 00:05:33.511 ************************************ 00:05:33.511 23:32:08 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:33.511 23:32:08 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:33.511 23:32:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3670042 ]] 00:05:33.511 23:32:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3670042 00:05:33.511 23:32:08 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3670042 ']' 00:05:33.511 23:32:08 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3670042 00:05:33.511 23:32:08 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:33.769 23:32:08 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:33.769 23:32:08 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3670042 00:05:33.769 23:32:08 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:33.769 23:32:08 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:33.769 23:32:08 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3670042' 00:05:33.769 killing process with pid 3670042 00:05:33.769 23:32:08 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3670042 00:05:33.769 23:32:08 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3670042 00:05:34.028 23:32:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3670102 ]] 00:05:34.028 23:32:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3670102 00:05:34.028 23:32:09 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3670102 ']' 00:05:34.028 23:32:09 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3670102 00:05:34.028 23:32:09 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:05:34.028 23:32:09 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:34.028 23:32:09 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3670102 00:05:34.028 23:32:09 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:34.028 23:32:09 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:34.028 23:32:09 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3670102' 00:05:34.028 killing process with pid 3670102 00:05:34.028 23:32:09 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3670102 00:05:34.028 23:32:09 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3670102 00:05:34.595 23:32:09 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:34.595 23:32:09 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:34.595 23:32:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3670042 ]] 00:05:34.595 23:32:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3670042 00:05:34.595 23:32:09 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3670042 ']' 00:05:34.595 23:32:09 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3670042 00:05:34.595 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3670042) - No such process 00:05:34.595 23:32:09 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3670042 is not found' 00:05:34.595 Process with pid 3670042 is not found 00:05:34.595 23:32:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3670102 ]] 00:05:34.595 23:32:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3670102 00:05:34.595 23:32:09 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3670102 ']' 00:05:34.595 23:32:09 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3670102 00:05:34.595 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3670102) - No such process 00:05:34.595 23:32:09 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3670102 is not found' 00:05:34.595 Process with pid 3670102 is not found 00:05:34.595 23:32:09 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:34.595 00:05:34.595 real 0m15.781s 00:05:34.595 user 0m27.733s 00:05:34.595 sys 0m5.169s 00:05:34.595 23:32:09 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.595 23:32:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:34.595 ************************************ 00:05:34.595 END TEST cpu_locks 00:05:34.595 ************************************ 00:05:34.595 23:32:09 event -- common/autotest_common.sh@1142 -- # return 0 00:05:34.595 00:05:34.595 real 0m39.662s 00:05:34.595 user 1m15.516s 00:05:34.595 sys 0m9.181s 00:05:34.595 23:32:09 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.595 23:32:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:34.595 ************************************ 00:05:34.595 END TEST event 00:05:34.595 ************************************ 00:05:34.595 23:32:09 -- common/autotest_common.sh@1142 -- # return 0 00:05:34.595 23:32:09 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:34.595 23:32:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.595 23:32:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.595 
23:32:09 -- common/autotest_common.sh@10 -- # set +x 00:05:34.595 ************************************ 00:05:34.595 START TEST thread 00:05:34.595 ************************************ 00:05:34.595 23:32:09 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:34.595 * Looking for test storage... 00:05:34.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:34.595 23:32:09 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:34.595 23:32:09 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:34.595 23:32:09 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.595 23:32:09 thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.854 ************************************ 00:05:34.854 START TEST thread_poller_perf 00:05:34.854 ************************************ 00:05:34.854 23:32:09 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:34.854 [2024-07-15 23:32:09.743137] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:05:34.854 [2024-07-15 23:32:09.743206] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3670541 ] 00:05:34.854 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.854 [2024-07-15 23:32:09.800918] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.854 [2024-07-15 23:32:09.902834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.854 Running 1000 pollers for 1 seconds with 1 microseconds period. 
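
poller_perf reports its results in raw TSC cycles: "busy" is the cycles spent running pollers, total_run_count the number of poller executions, and poller_cost their quotient, converted to nanoseconds via tsc_hz. The figures printed just below check out:

    # poller_cost = busy / total_run_count, then cycles -> ns at tsc_hz
    echo $(( 2707741887 / 362000 ))              # 7479 cyc per poller run
    echo $(( 7479 * 1000000000 / 2700000000 ))   # 2770 nsec at 2.7 GHz
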
00:05:36.229 ====================================== 00:05:36.229 busy:2707741887 (cyc) 00:05:36.229 total_run_count: 362000 00:05:36.229 tsc_hz: 2700000000 (cyc) 00:05:36.229 ====================================== 00:05:36.229 poller_cost: 7479 (cyc), 2770 (nsec) 00:05:36.229 00:05:36.229 real 0m1.289s 00:05:36.229 user 0m1.213s 00:05:36.229 sys 0m0.072s 00:05:36.229 23:32:11 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.229 23:32:11 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:36.229 ************************************ 00:05:36.229 END TEST thread_poller_perf 00:05:36.229 ************************************ 00:05:36.229 23:32:11 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:36.229 23:32:11 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:36.229 23:32:11 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:36.229 23:32:11 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.229 23:32:11 thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.229 ************************************ 00:05:36.229 START TEST thread_poller_perf 00:05:36.229 ************************************ 00:05:36.229 23:32:11 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:36.229 [2024-07-15 23:32:11.079076] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:05:36.229 [2024-07-15 23:32:11.079134] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3670702 ] 00:05:36.229 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.229 [2024-07-15 23:32:11.137527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.229 [2024-07-15 23:32:11.241495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.229 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:37.599 ====================================== 00:05:37.600 busy:2702271780 (cyc) 00:05:37.600 total_run_count: 4850000 00:05:37.600 tsc_hz: 2700000000 (cyc) 00:05:37.600 ====================================== 00:05:37.600 poller_cost: 557 (cyc), 206 (nsec) 00:05:37.600 00:05:37.600 real 0m1.286s 00:05:37.600 user 0m1.203s 00:05:37.600 sys 0m0.078s 00:05:37.600 23:32:12 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.600 23:32:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:37.600 ************************************ 00:05:37.600 END TEST thread_poller_perf 00:05:37.600 ************************************ 00:05:37.600 23:32:12 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:37.600 23:32:12 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:37.600 00:05:37.600 real 0m2.727s 00:05:37.600 user 0m2.468s 00:05:37.600 sys 0m0.260s 00:05:37.600 23:32:12 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.600 23:32:12 thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.600 ************************************ 00:05:37.600 END TEST thread 00:05:37.600 ************************************ 00:05:37.600 23:32:12 -- common/autotest_common.sh@1142 -- # return 0 00:05:37.600 23:32:12 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:37.600 23:32:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.600 23:32:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.600 23:32:12 -- common/autotest_common.sh@10 -- # set +x 00:05:37.600 ************************************ 00:05:37.600 START TEST accel 00:05:37.600 ************************************ 00:05:37.600 23:32:12 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:37.600 * Looking for test storage... 00:05:37.600 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:37.600 23:32:12 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:37.600 23:32:12 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:37.600 23:32:12 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:37.600 23:32:12 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3670893 00:05:37.600 23:32:12 accel -- accel/accel.sh@63 -- # waitforlisten 3670893 00:05:37.600 23:32:12 accel -- common/autotest_common.sh@829 -- # '[' -z 3670893 ']' 00:05:37.600 23:32:12 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:37.600 23:32:12 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.600 23:32:12 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:37.600 23:32:12 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.600 23:32:12 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.600 23:32:12 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
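
After the thread suite, the accel suite starts a target and first verifies the opcode-to-module mapping. With no hardware acceleration modules configured, every opcode is expected to map to the software module; the query performed in the trace below is:

    # dump opcode -> module assignments as key=value lines
    ./scripts/rpc.py accel_get_opc_assignments \
        | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
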
00:05:37.600 23:32:12 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:37.600 23:32:12 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.600 23:32:12 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.600 23:32:12 accel -- common/autotest_common.sh@10 -- # set +x 00:05:37.600 23:32:12 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.600 23:32:12 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.600 23:32:12 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:37.600 23:32:12 accel -- accel/accel.sh@41 -- # jq -r . 00:05:37.600 [2024-07-15 23:32:12.534461] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:05:37.600 [2024-07-15 23:32:12.534532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3670893 ] 00:05:37.600 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.600 [2024-07-15 23:32:12.591933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.600 [2024-07-15 23:32:12.704334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.858 23:32:12 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.858 23:32:12 accel -- common/autotest_common.sh@862 -- # return 0 00:05:37.858 23:32:12 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:37.858 23:32:12 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:37.858 23:32:12 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:37.858 23:32:12 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:37.858 23:32:12 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:37.858 23:32:12 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:37.858 23:32:12 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.858 23:32:12 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:37.858 23:32:12 accel -- common/autotest_common.sh@10 -- # set +x 00:05:37.858 23:32:12 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.116 23:32:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:38.116 23:32:12 accel -- accel/accel.sh@72 -- # IFS== 00:05:38.116 23:32:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:38.116 23:32:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:38.116 23:32:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:38.116 23:32:12 accel -- accel/accel.sh@72 -- # IFS== 00:05:38.116 23:32:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:38.116 23:32:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:38.116 23:32:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:38.116 23:32:12 accel -- accel/accel.sh@72 -- # IFS== 00:05:38.116 23:32:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:38.116 23:32:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:38.116 23:32:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:38.116 23:32:12 accel -- accel/accel.sh@72 -- # IFS== 00:05:38.116 23:32:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:38.116 23:32:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:38.116 23:32:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:38.116 23:32:12 accel -- accel/accel.sh@72 -- # IFS== 00:05:38.116 23:32:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:38.116 23:32:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:38.116 23:32:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:38.116 23:32:12 accel -- accel/accel.sh@72 -- # IFS== 00:05:38.116 23:32:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:38.116 23:32:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:38.116 23:32:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:38.116 23:32:12 accel -- accel/accel.sh@72 -- # IFS== 00:05:38.116 23:32:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:38.116 23:32:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:38.116 23:32:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:38.116 23:32:12 accel -- accel/accel.sh@72 -- # IFS== 00:05:38.116 23:32:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:38.116 23:32:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:38.116 23:32:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:38.116 23:32:12 accel -- accel/accel.sh@72 -- # IFS== 00:05:38.116 23:32:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:38.116 23:32:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:38.116 23:32:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:38.116 23:32:12 accel -- accel/accel.sh@72 -- # IFS== 00:05:38.116 23:32:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:38.116 23:32:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:38.116 23:32:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:38.116 23:32:12 accel -- accel/accel.sh@72 -- # IFS== 00:05:38.116 23:32:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:38.116 23:32:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:38.116 
23:32:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:38.116 23:32:12 accel -- accel/accel.sh@72 -- # IFS== 00:05:38.116 23:32:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:38.116 23:32:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:38.116 23:32:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:38.116 23:32:12 accel -- accel/accel.sh@72 -- # IFS== 00:05:38.116 23:32:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:38.116 23:32:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:38.116 23:32:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:38.116 23:32:12 accel -- accel/accel.sh@72 -- # IFS== 00:05:38.116 23:32:12 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:38.116 23:32:12 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:38.116 23:32:12 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:38.116 23:32:12 accel -- accel/accel.sh@72 -- # IFS== 00:05:38.116 23:32:13 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:38.116 23:32:13 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:38.116 23:32:13 accel -- accel/accel.sh@75 -- # killprocess 3670893 00:05:38.116 23:32:13 accel -- common/autotest_common.sh@948 -- # '[' -z 3670893 ']' 00:05:38.116 23:32:13 accel -- common/autotest_common.sh@952 -- # kill -0 3670893 00:05:38.116 23:32:13 accel -- common/autotest_common.sh@953 -- # uname 00:05:38.116 23:32:13 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:38.116 23:32:13 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3670893 00:05:38.116 23:32:13 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:38.116 23:32:13 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:38.116 23:32:13 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3670893' 00:05:38.116 killing process with pid 3670893 00:05:38.116 23:32:13 accel -- common/autotest_common.sh@967 -- # kill 3670893 00:05:38.116 23:32:13 accel -- common/autotest_common.sh@972 -- # wait 3670893 00:05:38.375 23:32:13 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:38.375 23:32:13 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:38.375 23:32:13 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:38.375 23:32:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.375 23:32:13 accel -- common/autotest_common.sh@10 -- # set +x 00:05:38.375 23:32:13 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:38.375 23:32:13 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:38.375 23:32:13 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:38.375 23:32:13 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:38.375 23:32:13 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:38.375 23:32:13 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.375 23:32:13 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.375 23:32:13 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:38.375 23:32:13 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:38.375 23:32:13 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:05:38.632 23:32:13 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.632 23:32:13 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:38.632 23:32:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:38.632 23:32:13 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:38.632 23:32:13 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:38.633 23:32:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.633 23:32:13 accel -- common/autotest_common.sh@10 -- # set +x 00:05:38.633 ************************************ 00:05:38.633 START TEST accel_missing_filename 00:05:38.633 ************************************ 00:05:38.633 23:32:13 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:38.633 23:32:13 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:38.633 23:32:13 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:38.633 23:32:13 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:38.633 23:32:13 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:38.633 23:32:13 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:38.633 23:32:13 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:38.633 23:32:13 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:38.633 23:32:13 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:38.633 23:32:13 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:38.633 23:32:13 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:38.633 23:32:13 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:38.633 23:32:13 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.633 23:32:13 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.633 23:32:13 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:38.633 23:32:13 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:38.633 23:32:13 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:38.633 [2024-07-15 23:32:13.574609] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:05:38.633 [2024-07-15 23:32:13.574670] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3671063 ] 00:05:38.633 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.633 [2024-07-15 23:32:13.633470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.633 [2024-07-15 23:32:13.737345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.890 [2024-07-15 23:32:13.799047] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:38.890 [2024-07-15 23:32:13.874821] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:38.890 A filename is required. 
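
"A filename is required." is the intended result: the compress workload reads its input from a file, and accel_perf was invoked without -l. With the test vector supplied the run starts; note, though, that the very next test shows combining compress with -y aborts, since compression does not support result verification:

    # compress needs an input file; do not add -y, verify is unsupported
    ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib
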
00:05:38.890 23:32:13 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:38.890 23:32:13 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:38.890 23:32:13 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:38.890 23:32:13 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:38.890 23:32:13 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:38.890 23:32:13 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:38.890 00:05:38.890 real 0m0.428s 00:05:38.890 user 0m0.323s 00:05:38.890 sys 0m0.138s 00:05:38.890 23:32:13 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.890 23:32:13 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:38.890 ************************************ 00:05:38.890 END TEST accel_missing_filename 00:05:38.890 ************************************ 00:05:38.890 23:32:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:38.890 23:32:14 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:38.890 23:32:14 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:38.890 23:32:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.890 23:32:14 accel -- common/autotest_common.sh@10 -- # set +x 00:05:39.148 ************************************ 00:05:39.148 START TEST accel_compress_verify 00:05:39.148 ************************************ 00:05:39.148 23:32:14 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:39.148 23:32:14 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:39.148 23:32:14 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:39.148 23:32:14 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:39.148 23:32:14 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:39.148 23:32:14 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:39.148 23:32:14 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:39.148 23:32:14 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:39.148 23:32:14 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:39.148 23:32:14 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:39.148 23:32:14 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:39.148 23:32:14 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:39.148 23:32:14 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.148 23:32:14 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.148 23:32:14 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:39.148 23:32:14 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:39.148 23:32:14 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:39.148 [2024-07-15 23:32:14.051223] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:05:39.148 [2024-07-15 23:32:14.051295] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3671215 ] 00:05:39.148 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.148 [2024-07-15 23:32:14.108758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.148 [2024-07-15 23:32:14.212592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.148 [2024-07-15 23:32:14.267674] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:39.408 [2024-07-15 23:32:14.350446] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:39.408 00:05:39.408 Compression does not support the verify option, aborting. 00:05:39.408 23:32:14 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:39.408 23:32:14 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:39.408 23:32:14 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:39.408 23:32:14 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:39.408 23:32:14 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:39.408 23:32:14 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:39.408 00:05:39.408 real 0m0.429s 00:05:39.408 user 0m0.337s 00:05:39.408 sys 0m0.129s 00:05:39.408 23:32:14 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.408 23:32:14 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:39.408 ************************************ 00:05:39.408 END TEST accel_compress_verify 00:05:39.408 ************************************ 00:05:39.408 23:32:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:39.408 23:32:14 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:39.408 23:32:14 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:39.408 23:32:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.408 23:32:14 accel -- common/autotest_common.sh@10 -- # set +x 00:05:39.408 ************************************ 00:05:39.408 START TEST accel_wrong_workload 00:05:39.408 ************************************ 00:05:39.408 23:32:14 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:39.408 23:32:14 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:39.408 23:32:14 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:39.408 23:32:14 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:39.408 23:32:14 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:39.408 23:32:14 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:39.408 23:32:14 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:39.408 23:32:14 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:05:39.408 23:32:14 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:39.408 23:32:14 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:39.408 23:32:14 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:39.408 23:32:14 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:39.408 23:32:14 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.408 23:32:14 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.408 23:32:14 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:39.408 23:32:14 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:39.408 23:32:14 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:39.408 Unsupported workload type: foobar 00:05:39.408 [2024-07-15 23:32:14.528372] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:39.668 accel_perf options: 00:05:39.668 [-h help message] 00:05:39.668 [-q queue depth per core] 00:05:39.668 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:39.668 [-T number of threads per core 00:05:39.668 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:39.668 [-t time in seconds] 00:05:39.668 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:39.668 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:39.668 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:39.668 [-l for compress/decompress workloads, name of uncompressed input file 00:05:39.668 [-S for crc32c workload, use this seed value (default 0) 00:05:39.668 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:39.668 [-f for fill workload, use this BYTE value (default 255) 00:05:39.668 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:39.668 [-y verify result if this switch is on] 00:05:39.668 [-a tasks to allocate per core (default: same value as -q)] 00:05:39.668 Can be used to spread operations across a wider range of memory. 
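
accel_wrong_workload asserts the rejection seen above: "-w foobar" must fail argument parsing and print the usage text. The NOT wrapper used throughout these tests inverts the exit status so an expected failure counts as a pass; stripped of the argument validation and exit-code mapping done by the real helper in autotest_common.sh, the idea is:

    NOT() {
        if "$@"; then
            return 1    # command unexpectedly succeeded
        fi
        return 0        # command failed, as the test expects
    }
    NOT ./build/examples/accel_perf -t 1 -w foobar    # passes: foobar is rejected
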
00:05:39.668 23:32:14 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:39.668 23:32:14 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:39.668 23:32:14 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:39.668 23:32:14 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:39.668 00:05:39.668 real 0m0.024s 00:05:39.668 user 0m0.015s 00:05:39.668 sys 0m0.009s 00:05:39.668 23:32:14 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.668 23:32:14 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:39.668 ************************************ 00:05:39.668 END TEST accel_wrong_workload 00:05:39.668 ************************************ 00:05:39.668 Error: writing output failed: Broken pipe 00:05:39.668 23:32:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:39.668 23:32:14 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:39.668 23:32:14 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:39.668 23:32:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.668 23:32:14 accel -- common/autotest_common.sh@10 -- # set +x 00:05:39.668 ************************************ 00:05:39.668 START TEST accel_negative_buffers 00:05:39.668 ************************************ 00:05:39.668 23:32:14 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:39.668 23:32:14 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:39.668 23:32:14 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:39.668 23:32:14 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:39.668 23:32:14 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:39.668 23:32:14 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:39.668 23:32:14 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:39.668 23:32:14 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:39.668 23:32:14 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:39.668 23:32:14 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:39.668 23:32:14 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:39.668 23:32:14 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:39.668 23:32:14 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.668 23:32:14 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.668 23:32:14 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:39.668 23:32:14 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:39.668 23:32:14 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:39.668 -x option must be non-negative. 
00:05:39.668 [2024-07-15 23:32:14.600591] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:39.668 accel_perf options: 00:05:39.668 [-h help message] 00:05:39.668 [-q queue depth per core] 00:05:39.668 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:39.668 [-T number of threads per core 00:05:39.668 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:39.668 [-t time in seconds] 00:05:39.668 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:39.668 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:39.668 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:39.668 [-l for compress/decompress workloads, name of uncompressed input file 00:05:39.668 [-S for crc32c workload, use this seed value (default 0) 00:05:39.668 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:39.668 [-f for fill workload, use this BYTE value (default 255) 00:05:39.668 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:39.668 [-y verify result if this switch is on] 00:05:39.668 [-a tasks to allocate per core (default: same value as -q)] 00:05:39.668 Can be used to spread operations across a wider range of memory. 00:05:39.668 23:32:14 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:39.668 23:32:14 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:39.668 23:32:14 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:39.668 23:32:14 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:39.668 00:05:39.668 real 0m0.024s 00:05:39.668 user 0m0.016s 00:05:39.668 sys 0m0.009s 00:05:39.668 23:32:14 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.668 23:32:14 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:39.668 ************************************ 00:05:39.668 END TEST accel_negative_buffers 00:05:39.668 ************************************ 00:05:39.668 Error: writing output failed: Broken pipe 00:05:39.668 23:32:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:39.668 23:32:14 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:39.668 23:32:14 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:39.668 23:32:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.668 23:32:14 accel -- common/autotest_common.sh@10 -- # set +x 00:05:39.668 ************************************ 00:05:39.668 START TEST accel_crc32c 00:05:39.668 ************************************ 00:05:39.668 23:32:14 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:39.668 23:32:14 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:39.668 23:32:14 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:39.668 23:32:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.668 23:32:14 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:39.668 23:32:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.668 23:32:14 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:39.668 23:32:14 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:39.668 23:32:14 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:39.668 23:32:14 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:39.668 23:32:14 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.668 23:32:14 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.668 23:32:14 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:39.668 23:32:14 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:39.668 23:32:14 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:39.668 [2024-07-15 23:32:14.669310] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:05:39.668 [2024-07-15 23:32:14.669374] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3671274 ] 00:05:39.668 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.668 [2024-07-15 23:32:14.728213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.927 [2024-07-15 23:32:14.833696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.927 23:32:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.301 23:32:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.301 23:32:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:05:41.301 23:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.301 23:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.301 23:32:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.301 23:32:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.301 23:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.301 23:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.301 23:32:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.301 23:32:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.301 23:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.301 23:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.301 23:32:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.301 23:32:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.301 23:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.301 23:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.301 23:32:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.301 23:32:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.301 23:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.301 23:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.301 23:32:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.301 23:32:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.301 23:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.301 23:32:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.301 23:32:16 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:41.301 23:32:16 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:41.301 23:32:16 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:41.301 00:05:41.301 real 0m1.438s 00:05:41.301 user 0m1.303s 00:05:41.301 sys 0m0.138s 00:05:41.301 23:32:16 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.301 23:32:16 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:41.301 ************************************ 00:05:41.301 END TEST accel_crc32c 00:05:41.301 ************************************ 00:05:41.301 23:32:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:41.301 23:32:16 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:41.301 23:32:16 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:41.301 23:32:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.301 23:32:16 accel -- common/autotest_common.sh@10 -- # set +x 00:05:41.301 ************************************ 00:05:41.301 START TEST accel_crc32c_C2 00:05:41.301 ************************************ 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.301 23:32:16 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:41.301 [2024-07-15 23:32:16.151106] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:05:41.301 [2024-07-15 23:32:16.151164] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3671548 ] 00:05:41.301 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.301 [2024-07-15 23:32:16.208745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.301 [2024-07-15 23:32:16.310574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.301 23:32:16 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.301 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.302 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:41.302 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.302 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:05:41.302 23:32:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.676 23:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.676 23:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.676 23:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.676 23:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.676 23:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.676 23:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.676 23:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.676 23:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.676 23:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.676 23:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.676 23:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.676 23:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.676 23:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.676 23:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.676 23:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.676 23:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.676 23:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.676 23:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.676 23:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.676 23:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.676 23:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.676 23:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.676 23:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.676 23:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.676 23:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:42.676 23:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:42.676 23:32:17 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:42.676 00:05:42.676 real 0m1.416s 00:05:42.676 user 0m1.286s 00:05:42.676 sys 0m0.132s 00:05:42.676 23:32:17 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.676 23:32:17 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:42.676 ************************************ 00:05:42.676 END TEST accel_crc32c_C2 00:05:42.676 ************************************ 00:05:42.676 23:32:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:42.676 23:32:17 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:42.676 23:32:17 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:42.676 23:32:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.676 23:32:17 accel -- common/autotest_common.sh@10 -- # set +x 00:05:42.676 ************************************ 00:05:42.676 START TEST accel_copy 00:05:42.676 ************************************ 00:05:42.676 23:32:17 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:05:42.676 23:32:17 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:42.676 23:32:17 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
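The accel_crc32c_C2 case that finishes above is the same software crc32c path run with a chained-buffer count of 2. A minimal way to reproduce it by hand, assuming an SPDK tree built with examples (the relative path and the flag readings are inferred from the trace, not authoritative):

    # Sketch: re-run the chained crc32c case outside the harness.
    # Flags mirror 'accel_test -t 1 -w crc32c -y -C 2' from the log:
    #   -t 1       run the workload for 1 second ('1 seconds' in the val= trace)
    #   -w crc32c  crc32c workload; -C 2 chains two source buffers
    #   -y         verify the results ('Yes' in the val= trace)
    ./build/examples/accel_perf -t 1 -w crc32c -y -C 2

The accel_copy run that starts next uses the same template with '-w copy' and no '-C' argument.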
00:05:42.676 23:32:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.676 23:32:17 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:42.676 23:32:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.676 23:32:17 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:42.676 23:32:17 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:42.676 23:32:17 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:42.676 23:32:17 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:42.676 23:32:17 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.676 23:32:17 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.676 23:32:17 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:42.676 23:32:17 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:42.676 23:32:17 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:42.676 [2024-07-15 23:32:17.620623] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:05:42.676 [2024-07-15 23:32:17.620688] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3671709 ] 00:05:42.676 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.676 [2024-07-15 23:32:17.678574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.676 [2024-07-15 23:32:17.779491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.935 23:32:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.310 23:32:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:44.310 23:32:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.310 23:32:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.310 23:32:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.310 
23:32:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:44.310 23:32:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.310 23:32:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.310 23:32:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.310 23:32:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:44.310 23:32:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.310 23:32:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.310 23:32:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.310 23:32:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:44.310 23:32:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.310 23:32:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.310 23:32:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.310 23:32:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:44.310 23:32:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.310 23:32:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.310 23:32:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.310 23:32:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:44.310 23:32:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.310 23:32:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.310 23:32:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.310 23:32:19 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:44.310 23:32:19 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:44.310 23:32:19 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:44.310 00:05:44.310 real 0m1.427s 00:05:44.310 user 0m1.289s 00:05:44.310 sys 0m0.139s 00:05:44.310 23:32:19 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.310 23:32:19 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:44.310 ************************************ 00:05:44.310 END TEST accel_copy 00:05:44.310 ************************************ 00:05:44.310 23:32:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:44.310 23:32:19 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:44.310 23:32:19 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:44.310 23:32:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.310 23:32:19 accel -- common/autotest_common.sh@10 -- # set +x 00:05:44.310 ************************************ 00:05:44.310 START TEST accel_fill 00:05:44.310 ************************************ 00:05:44.310 23:32:19 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:44.310 23:32:19 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:44.310 23:32:19 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:44.310 23:32:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:44.310 23:32:19 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:44.310 23:32:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:44.310 23:32:19 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:44.310 23:32:19 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:05:44.310 23:32:19 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.310 23:32:19 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.310 23:32:19 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.310 23:32:19 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.310 23:32:19 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.310 23:32:19 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:44.310 23:32:19 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:44.310 [2024-07-15 23:32:19.096313] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:05:44.310 [2024-07-15 23:32:19.096377] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3671866 ] 00:05:44.310 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.310 [2024-07-15 23:32:19.154560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.310 [2024-07-15 23:32:19.258393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.310 23:32:19 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:44.310 23:32:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:44.310 23:32:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:44.310 23:32:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:44.310 23:32:19 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:44.310 23:32:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:44.310 23:32:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:44.310 23:32:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:44.310 23:32:19 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
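The fill case being configured here adds pattern, depth, and alignment arguments on top of the common template; the val= lines around this point show '-f 128' landing in the config as fill value 0x80, alongside the two 64s from '-q 64 -a 64'. A hand-run sketch under the same path assumption as above (the exact roles of -q and -a are inferred, not stated by the log):

    # Sketch: the fill workload as driven above ('-w fill -f 128 -q 64 -a 64').
    # 0x80 in the trace is simply decimal 128 passed via -f.
    ./build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y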
00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:44.311 23:32:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.683 23:32:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:45.683 23:32:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.683 23:32:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.683 23:32:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.683 23:32:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:45.683 23:32:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.683 23:32:20 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:05:45.683 23:32:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.683 23:32:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:45.683 23:32:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.683 23:32:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.683 23:32:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.683 23:32:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:45.683 23:32:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.683 23:32:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.683 23:32:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.683 23:32:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:45.683 23:32:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.684 23:32:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.684 23:32:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.684 23:32:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:45.684 23:32:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:45.684 23:32:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:45.684 23:32:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:45.684 23:32:20 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:45.684 23:32:20 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:45.684 23:32:20 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:45.684 00:05:45.684 real 0m1.432s 00:05:45.684 user 0m1.298s 00:05:45.684 sys 0m0.136s 00:05:45.684 23:32:20 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.684 23:32:20 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:45.684 ************************************ 00:05:45.684 END TEST accel_fill 00:05:45.684 ************************************ 00:05:45.684 23:32:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:45.684 23:32:20 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:45.684 23:32:20 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:45.684 23:32:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.684 23:32:20 accel -- common/autotest_common.sh@10 -- # set +x 00:05:45.684 ************************************ 00:05:45.684 START TEST accel_copy_crc32c 00:05:45.684 ************************************ 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:45.684 [2024-07-15 23:32:20.578263] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:05:45.684 [2024-07-15 23:32:20.578333] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3672034 ] 00:05:45.684 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.684 [2024-07-15 23:32:20.635683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.684 [2024-07-15 23:32:20.747393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:45.684 
23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.684 23:32:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.053 23:32:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:47.053 23:32:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.053 23:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.053 23:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.053 23:32:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:47.053 23:32:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.053 23:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.053 23:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.053 23:32:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:47.053 23:32:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.053 23:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.053 23:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.053 23:32:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:47.053 23:32:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.053 23:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.053 23:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.053 23:32:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:47.053 23:32:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.053 23:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.053 23:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.053 23:32:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:47.053 23:32:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.053 23:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.053 23:32:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.053 23:32:21 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:47.053 23:32:21 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:47.053 23:32:21 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:47.053 00:05:47.053 real 0m1.423s 00:05:47.053 user 0m1.299s 00:05:47.053 sys 0m0.126s 00:05:47.053 23:32:21 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.053 23:32:21 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:47.053 ************************************ 00:05:47.053 END TEST accel_copy_crc32c 00:05:47.053 ************************************ 00:05:47.053 23:32:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:47.053 23:32:22 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:47.053 23:32:22 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:47.053 23:32:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.053 23:32:22 accel -- common/autotest_common.sh@10 -- # set +x 00:05:47.053 ************************************ 00:05:47.053 START TEST accel_copy_crc32c_C2 00:05:47.053 ************************************ 00:05:47.053 23:32:22 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:47.053 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:47.053 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:47.053 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.053 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:47.053 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.053 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:47.053 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:47.053 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:47.053 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:47.053 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.053 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.053 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:47.053 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:47.053 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:47.053 [2024-07-15 23:32:22.048758] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:05:47.053 [2024-07-15 23:32:22.048815] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3672301 ] 00:05:47.053 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.053 [2024-07-15 23:32:22.104464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.310 [2024-07-15 23:32:22.209299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.310 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:47.310 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.310 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.310 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.310 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:47.310 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.310 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.310 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.310 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:47.310 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.310 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.310 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.310 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:47.310 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.310 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.310 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
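Every accel_perf invocation in this section passes '-c /dev/fd/62': the harness builds an accel JSON config (empty here, since none of the '[[ 0 -gt 0 ]]' / '[[ -n '' ]]' module checks fire), filters it through 'jq -r .', and hands it over on file descriptor 62. A rough stand-in for that plumbing, with the empty-object config being an assumption rather than the harness's literal output:

    # Sketch: feed accel_perf a JSON config on fd 62 the way accel.sh does.
    # '{}' stands in for the effectively empty config of this run.
    exec 62<<<'{}'
    ./build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
    exec 62<&-    # close the descriptor afterwards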
00:05:47.310 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:47.310 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.310 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.310 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.310 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:47.310 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.310 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:47.310 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.310 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.310 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:47.310 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.310 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.310 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.310 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:47.311 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.311 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.311 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.311 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:47.311 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.311 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.311 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.311 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:47.311 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.311 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.311 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.311 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:47.311 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.311 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:47.311 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.311 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.311 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:47.311 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.311 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.311 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.311 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:47.311 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.311 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.311 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.311 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:47.311 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.311 23:32:22 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:05:47.311 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.311 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:47.311 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.311 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.311 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.311 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:47.311 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.311 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.311 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.311 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:47.311 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.311 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.311 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.311 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:47.311 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.311 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.311 23:32:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.678 23:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:48.678 23:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.678 23:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.678 23:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.678 23:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:48.678 23:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.678 23:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.678 23:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.678 23:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:48.678 23:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.678 23:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.678 23:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.678 23:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:48.678 23:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.678 23:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.678 23:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.678 23:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:48.678 23:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.678 23:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.678 23:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.678 23:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:48.678 23:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.678 23:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.678 23:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:05:48.678 23:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:48.678 23:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:48.678 23:32:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:48.678 00:05:48.678 real 0m1.428s 00:05:48.678 user 0m1.302s 00:05:48.678 sys 0m0.129s 00:05:48.678 23:32:23 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.678 23:32:23 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:48.678 ************************************ 00:05:48.678 END TEST accel_copy_crc32c_C2 00:05:48.678 ************************************ 00:05:48.678 23:32:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:48.678 23:32:23 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:48.678 23:32:23 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:48.678 23:32:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.678 23:32:23 accel -- common/autotest_common.sh@10 -- # set +x 00:05:48.678 ************************************ 00:05:48.678 START TEST accel_dualcast 00:05:48.678 ************************************ 00:05:48.678 23:32:23 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:05:48.678 23:32:23 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:48.678 23:32:23 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:48.678 23:32:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:48.678 23:32:23 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:48.678 23:32:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:48.678 23:32:23 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:48.678 23:32:23 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:48.678 23:32:23 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:48.678 23:32:23 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:48.678 23:32:23 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.678 23:32:23 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.678 23:32:23 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:48.678 23:32:23 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:48.678 23:32:23 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:48.678 [2024-07-15 23:32:23.526796] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
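The dualcast run is only being configured at this point; note that every completed test in this section reports nearly the same wall time (between 0m1.416s and 0m1.438s so far), i.e. the 1-second '-t 1' measurement window plus roughly 0.4s of application start-up and teardown. To pull those figures out of a saved copy of this console output (the file name is hypothetical; the job itself does not write it):

    # Sketch: extract each test's wall time from a saved console log.
    grep -Eo 'real [0-9]+m[0-9.]+s' console.log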
00:05:48.678 [2024-07-15 23:32:23.526859] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3672454 ] 00:05:48.678 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.678 [2024-07-15 23:32:23.582909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.678 [2024-07-15 23:32:23.685772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.678 23:32:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:48.678 23:32:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:48.678 23:32:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:48.678 23:32:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:48.678 23:32:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:48.678 23:32:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:48.678 23:32:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:48.678 23:32:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:48.678 23:32:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:48.679 23:32:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.086 23:32:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:50.086 23:32:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.086 23:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.086 23:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.086 23:32:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:50.086 23:32:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.086 23:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.086 23:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.086 23:32:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:50.086 23:32:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.086 23:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.086 23:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.086 23:32:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:50.086 23:32:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.086 23:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.086 23:32:24 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:05:50.086 23:32:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:05:50.086 23:32:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:05:50.086 23:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:05:50.086 23:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:05:50.086 23:32:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:05:50.086 23:32:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:05:50.086 23:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:05:50.086 23:32:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:05:50.086 23:32:24 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:50.086 23:32:24 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:05:50.086 23:32:24 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:50.086
00:05:50.086 real 0m1.429s
00:05:50.086 user 0m1.296s
00:05:50.086 sys 0m0.135s
00:05:50.086 23:32:24 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:50.086 23:32:24 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x
00:05:50.086 ************************************
00:05:50.086 END TEST accel_dualcast
00:05:50.086 ************************************
00:05:50.086 23:32:24 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:50.086 23:32:24 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:05:50.086 23:32:24 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:05:50.086 23:32:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:50.086 23:32:24 accel -- common/autotest_common.sh@10 -- # set +x
00:05:50.086 ************************************
00:05:50.086 START TEST accel_compare
00:05:50.086 ************************************
00:05:50.086 23:32:24 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y
00:05:50.086 23:32:24 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc
00:05:50.086 23:32:24 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module
00:05:50.086 23:32:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:50.086 23:32:24 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:05:50.086 23:32:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:50.086 23:32:24 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:05:50.086 23:32:24 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config
00:05:50.086 23:32:24 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:50.086 23:32:24 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:50.086 23:32:24 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:50.086 23:32:24 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:50.086 23:32:24 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:50.086 23:32:24 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=,
00:05:50.086 23:32:24 accel.accel_compare -- accel/accel.sh@41 -- # jq -r .
00:05:50.086 [2024-07-15 23:32:25.003567] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization...
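For context: accel_test is a thin wrapper that launches SPDK's accel_perf example binary, and the xtrace above records the exact command used for the compare workload, a software-path comparison of two 4096-byte buffers run for 1 second with result verification (-y). A minimal hand-run equivalent, assuming the same workspace checkout and using only flags visible in this log (the -c /dev/fd/62 argument just feeds in the JSON config that build_accel_config assembles, and can be dropped when no hardware module is configured), would be:

    # sketch: repeat the 1-second software-path compare run by hand and verify the result
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w compare -y

The accel_module=software lines in the trace confirm the operation ran on the software engine rather than a hardware offload.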
00:05:50.086 [2024-07-15 23:32:25.003627] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3672613 ] 00:05:50.086 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.086 [2024-07-15 23:32:25.061976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.086 [2024-07-15 23:32:25.180402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:50.346 23:32:25 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:50.346 23:32:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.722 23:32:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:51.722 23:32:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.722 23:32:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.722 23:32:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.722 23:32:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:51.722 23:32:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.722 23:32:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.722 23:32:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.722 23:32:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:51.722 23:32:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.722 23:32:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.722 23:32:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.722 23:32:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:51.722 23:32:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:51.722 23:32:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.722 23:32:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.722 
23:32:26 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:05:51.722 23:32:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:51.722 23:32:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:51.722 23:32:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:51.722 23:32:26 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:05:51.722 23:32:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:05:51.722 23:32:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:05:51.722 23:32:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:05:51.722 23:32:26 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:51.722 23:32:26 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]]
00:05:51.722 23:32:26 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:51.722
00:05:51.722 real 0m1.447s
00:05:51.722 user 0m1.308s
00:05:51.722 sys 0m0.140s
00:05:51.722 23:32:26 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:51.722 23:32:26 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x
00:05:51.722 ************************************
00:05:51.722 END TEST accel_compare
00:05:51.722 ************************************
00:05:51.722 23:32:26 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:51.722 23:32:26 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:05:51.722 23:32:26 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:05:51.722 23:32:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:51.722 23:32:26 accel -- common/autotest_common.sh@10 -- # set +x
00:05:51.722 ************************************
00:05:51.722 START TEST accel_xor
00:05:51.722 ************************************
00:05:51.722 23:32:26 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y
00:05:51.722 23:32:26 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc
00:05:51.722 23:32:26 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module
00:05:51.722 23:32:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:51.722 23:32:26 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
00:05:51.722 23:32:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:51.722 23:32:26 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:05:51.722 23:32:26 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
00:05:51.722 23:32:26 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:51.722 23:32:26 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:51.722 23:32:26 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:51.722 23:32:26 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:51.722 23:32:26 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:51.722 23:32:26 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=,
00:05:51.722 23:32:26 accel.accel_xor -- accel/accel.sh@41 -- # jq -r .
00:05:51.722 [2024-07-15 23:32:26.488315] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization...
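For context: the xor workload that starts here XORs several 4096-byte source buffers into one destination buffer. With no -x option, accel_test appears to default to two sources (the val=2 read in the trace below); that default is an inference from this trace, not something the log states outright. A hand-run equivalent under the same workspace assumption:

    # sketch: 1-second software xor of two source buffers (the apparent default), verified with -y
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y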
00:05:51.722 [2024-07-15 23:32:26.488374] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3672889 ] 00:05:51.722 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.722 [2024-07-15 23:32:26.545648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.722 [2024-07-15 23:32:26.652443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.722 23:32:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:51.722 23:32:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.722 23:32:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.722 23:32:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.722 23:32:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:51.722 23:32:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.722 23:32:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.722 23:32:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.722 23:32:26 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:51.722 23:32:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.722 23:32:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.722 23:32:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.722 23:32:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:51.722 23:32:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.722 23:32:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.722 23:32:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.722 23:32:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:51.722 23:32:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.722 23:32:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.722 23:32:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.722 23:32:26 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:51.722 23:32:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.723 23:32:26 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.723 23:32:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.098 23:32:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.098 23:32:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.098 23:32:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.098 23:32:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.098 23:32:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.098 23:32:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.098 23:32:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.098 23:32:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.098 23:32:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.098 23:32:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.098 23:32:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.098 23:32:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.098 23:32:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.098 23:32:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.098 23:32:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.098 23:32:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.098 23:32:27 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:05:53.098 23:32:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.098 23:32:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.098 23:32:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.099 23:32:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.099 23:32:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.099 23:32:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.099 23:32:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.099 23:32:27 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:53.099 23:32:27 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:53.099 23:32:27 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:53.099 00:05:53.099 real 0m1.430s 00:05:53.099 user 0m1.297s 00:05:53.099 sys 0m0.135s 00:05:53.099 23:32:27 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.099 23:32:27 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:53.099 ************************************ 00:05:53.099 END TEST accel_xor 00:05:53.099 ************************************ 00:05:53.099 23:32:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:53.099 23:32:27 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:53.099 23:32:27 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:53.099 23:32:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.099 23:32:27 accel -- common/autotest_common.sh@10 -- # set +x 00:05:53.099 ************************************ 00:05:53.099 START TEST accel_xor 00:05:53.099 ************************************ 00:05:53.099 23:32:27 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:05:53.099 23:32:27 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:53.099 23:32:27 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:53.099 23:32:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.099 23:32:27 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:53.099 23:32:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.099 23:32:27 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:53.099 23:32:27 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:53.099 23:32:27 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.099 23:32:27 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.099 23:32:27 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.099 23:32:27 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.099 23:32:27 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:53.099 23:32:27 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:53.099 23:32:27 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:53.099 [2024-07-15 23:32:27.971748] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
00:05:53.099 [2024-07-15 23:32:27.971809] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3673042 ] 00:05:53.099 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.099 [2024-07-15 23:32:28.028243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.099 [2024-07-15 23:32:28.132634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.099 23:32:28 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.099 23:32:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.472 23:32:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.472 23:32:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.472 23:32:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.472 23:32:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.472 23:32:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.472 23:32:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.472 23:32:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.472 23:32:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.472 23:32:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.472 23:32:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.472 23:32:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.472 23:32:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.472 23:32:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.472 23:32:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.472 23:32:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.472 23:32:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.472 23:32:29 accel.accel_xor -- accel/accel.sh@20 -- 
# val=
00:05:54.472 23:32:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:54.472 23:32:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:54.472 23:32:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:54.472 23:32:29 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:05:54.472 23:32:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:05:54.472 23:32:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:05:54.472 23:32:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:05:54.472 23:32:29 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:54.472 23:32:29 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:05:54.472 23:32:29 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:54.472
00:05:54.472 real 0m1.433s
00:05:54.472 user 0m1.297s
00:05:54.472 sys 0m0.138s
00:05:54.472 23:32:29 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:54.472 23:32:29 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:05:54.472 ************************************
00:05:54.472 END TEST accel_xor
00:05:54.472 ************************************
00:05:54.472 23:32:29 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:54.472 23:32:29 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:05:54.472 23:32:29 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:05:54.472 23:32:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:54.472 23:32:29 accel -- common/autotest_common.sh@10 -- # set +x
00:05:54.472 ************************************
00:05:54.472 START TEST accel_dif_verify
00:05:54.472 ************************************
00:05:54.472 23:32:29 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify
00:05:54.472 23:32:29 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc
00:05:54.472 23:32:29 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module
00:05:54.472 23:32:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:54.472 23:32:29 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:05:54.472 23:32:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:54.472 23:32:29 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:05:54.472 23:32:29 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config
00:05:54.472 23:32:29 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:54.472 23:32:29 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:54.472 23:32:29 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:54.472 23:32:29 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:54.472 23:32:29 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:54.472 23:32:29 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=,
00:05:54.472 23:32:29 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r .
00:05:54.472 [2024-07-15 23:32:29.457937] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization...
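For context: dif_verify exercises DIF (Data Integrity Field) checking. The sizes read in the trace below ('4096 bytes' buffers, '512 bytes', '8 bytes') are consistent with the T10 DIF layout of 8 bytes of protection information per 512-byte block, though the exact mapping of those values to parameters is an inference here, not something the log states. Note that accel_test omits -y this time; the workload is itself a verification, and the trace records val=No for the separate result check. Hand-run equivalent:

    # sketch: 1-second software dif_verify run, flags exactly as shown in the xtrace above
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_verify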
00:05:54.472 [2024-07-15 23:32:29.458046] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3673201 ] 00:05:54.472 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.472 [2024-07-15 23:32:29.514072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.731 [2024-07-15 23:32:29.621867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:54.731 23:32:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.102 23:32:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val=
00:05:56.102 23:32:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:56.102 23:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:56.102 23:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:56.102 23:32:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:05:56.102 23:32:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:56.102 23:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:56.102 23:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:56.102 23:32:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:05:56.102 23:32:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:56.102 23:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:56.102 23:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:56.102 23:32:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:05:56.102 23:32:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:56.102 23:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:56.102 23:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:56.102 23:32:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:05:56.102 23:32:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:56.102 23:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:56.102 23:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:56.102 23:32:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:05:56.102 23:32:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:05:56.102 23:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:05:56.102 23:32:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:05:56.102 23:32:30 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:56.102 23:32:30 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]]
00:05:56.102 23:32:30 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:56.102
00:05:56.102 real 0m1.430s
00:05:56.102 user 0m1.309s
00:05:56.102 sys 0m0.125s
00:05:56.102 23:32:30 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:56.102 23:32:30 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x
00:05:56.102 ************************************
00:05:56.102 END TEST accel_dif_verify
00:05:56.102 ************************************
00:05:56.102 23:32:30 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:56.102 23:32:30 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:05:56.102 23:32:30 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:05:56.102 23:32:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:56.102 23:32:30 accel -- common/autotest_common.sh@10 -- # set +x
00:05:56.102 ************************************
00:05:56.102 START TEST accel_dif_generate
00:05:56.102 ************************************
00:05:56.102 23:32:30 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate
00:05:56.102 23:32:30 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc
00:05:56.102 23:32:30 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module
00:05:56.102 23:32:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:05:56.102
23:32:30 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:56.102 23:32:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:56.102 23:32:30 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:56.102 23:32:30 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:56.102 23:32:30 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:56.102 23:32:30 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:56.102 23:32:30 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.102 23:32:30 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.102 23:32:30 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:56.102 23:32:30 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:56.102 23:32:30 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:56.102 [2024-07-15 23:32:30.929603] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:05:56.102 [2024-07-15 23:32:30.929668] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3673469 ] 00:05:56.102 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.102 [2024-07-15 23:32:30.986066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.102 [2024-07-15 23:32:31.095611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.102 23:32:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:56.102 23:32:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:56.102 23:32:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:56.102 23:32:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:56.102 23:32:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:56.102 23:32:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:56.102 23:32:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:56.102 23:32:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:56.102 23:32:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:56.102 23:32:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:56.102 23:32:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:56.102 23:32:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:56.102 23:32:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:56.102 23:32:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:56.102 23:32:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:56.102 23:32:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:56.102 23:32:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:56.102 23:32:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:56.102 23:32:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:56.102 23:32:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:56.102 23:32:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:56.102 23:32:31 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:56.102 23:32:31 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:56.102 23:32:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:56.102 23:32:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:56.102 23:32:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:56.102 23:32:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:56.102 23:32:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:56.103 23:32:31 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:56.103 23:32:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.472 23:32:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:57.472 23:32:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.472 23:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.472 23:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.472 23:32:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:57.472 23:32:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.472 23:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.472 23:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.472 23:32:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:57.472 23:32:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.473 23:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.473 23:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.473 23:32:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:57.473 23:32:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.473 23:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.473 23:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.473 23:32:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:57.473 23:32:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.473 23:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.473 23:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.473 23:32:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:57.473 23:32:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.473 23:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.473 23:32:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.473 23:32:32 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:57.473 23:32:32 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:57.473 23:32:32 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:57.473 00:05:57.473 real 0m1.426s 00:05:57.473 user 0m1.309s 00:05:57.473 sys 0m0.120s 00:05:57.473 23:32:32 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.473 23:32:32 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:57.473 ************************************ 00:05:57.473 END TEST accel_dif_generate 00:05:57.473 ************************************ 00:05:57.473 23:32:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:57.473 23:32:32 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:57.473 23:32:32 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:57.473 23:32:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.473 23:32:32 accel -- common/autotest_common.sh@10 -- # set +x 00:05:57.473 ************************************ 00:05:57.473 START TEST accel_dif_generate_copy 00:05:57.473 ************************************ 00:05:57.473 23:32:32 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:05:57.473 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:57.473 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:57.473 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.473 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:57.473 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.473 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:57.473 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:57.473 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:57.473 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:57.473 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.473 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.473 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:57.473 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:57.473 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:57.473 [2024-07-15 23:32:32.406091] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
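At this point the harness has logged the full accel_perf invocation for the dif_generate_copy case. For reference, it reduces to the example binary plus a workload selector and a duration; a minimal sketch of running it by hand against a local SPDK build (the -c /dev/fd/62 argument is only the JSON config that build_accel_config pipes in, and is omitted here):

    # Drive the software dif_generate_copy engine for 1 second,
    # mirroring the flags logged above (sketch; paths assume a local SPDK checkout).
    ./build/examples/accel_perf -t 1 -w dif_generate_copy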
00:05:57.473 [2024-07-15 23:32:32.406152] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3673637 ] 00:05:57.473 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.473 [2024-07-15 23:32:32.463697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.473 [2024-07-15 23:32:32.567446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.731 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:57.731 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.731 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.731 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.731 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:57.731 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.731 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.731 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.731 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.732 23:32:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.101 23:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:59.101 23:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.101 23:32:33 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:05:59.101 23:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.101 23:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:59.101 23:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.101 23:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.101 23:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.101 23:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:59.101 23:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.101 23:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.101 23:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.101 23:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:59.101 23:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.101 23:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.101 23:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.101 23:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:59.101 23:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.101 23:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.101 23:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.101 23:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:59.101 23:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.101 23:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.101 23:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.101 23:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:59.101 23:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:59.101 23:32:33 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:59.101 00:05:59.101 real 0m1.433s 00:05:59.101 user 0m1.305s 00:05:59.101 sys 0m0.130s 00:05:59.101 23:32:33 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.101 23:32:33 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:59.101 ************************************ 00:05:59.101 END TEST accel_dif_generate_copy 00:05:59.101 ************************************ 00:05:59.101 23:32:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:59.101 23:32:33 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:59.101 23:32:33 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:59.102 23:32:33 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:59.102 23:32:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.102 23:32:33 accel -- common/autotest_common.sh@10 -- # set +x 00:05:59.102 ************************************ 00:05:59.102 START TEST accel_comp 00:05:59.102 ************************************ 00:05:59.102 23:32:33 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:59.102 23:32:33 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:05:59.102 23:32:33 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:05:59.102 23:32:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.102 23:32:33 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:59.102 23:32:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.102 23:32:33 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:59.102 23:32:33 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:59.102 23:32:33 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:59.102 23:32:33 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:59.102 23:32:33 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.102 23:32:33 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.102 23:32:33 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:59.102 23:32:33 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:59.102 23:32:33 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:59.102 [2024-07-15 23:32:33.889270] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:05:59.102 [2024-07-15 23:32:33.889333] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3673788 ] 00:05:59.102 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.102 [2024-07-15 23:32:33.948944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.102 [2024-07-15 23:32:34.052785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.102 23:32:34 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.102 23:32:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.472 23:32:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:00.472 23:32:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.472 23:32:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.472 23:32:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.472 23:32:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:00.472 23:32:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.472 23:32:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.472 23:32:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.472 23:32:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:00.472 23:32:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.472 23:32:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.472 23:32:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.472 23:32:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:00.472 23:32:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.472 23:32:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.472 23:32:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.472 23:32:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:00.472 23:32:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.472 23:32:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.472 23:32:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.472 23:32:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:00.472 23:32:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.472 23:32:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.472 23:32:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.472 23:32:35 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:00.472 23:32:35 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:00.472 23:32:35 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.472 00:06:00.472 real 0m1.438s 00:06:00.472 user 0m1.308s 00:06:00.472 sys 0m0.132s 00:06:00.472 23:32:35 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.472 23:32:35 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:00.472 ************************************ 00:06:00.472 END TEST accel_comp 00:06:00.472 ************************************ 00:06:00.472 23:32:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:00.472 23:32:35 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:00.472 23:32:35 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:00.472 23:32:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.472 23:32:35 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:00.472 ************************************ 00:06:00.472 START TEST accel_decomp 00:06:00.472 ************************************ 00:06:00.472 23:32:35 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:00.472 23:32:35 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:00.472 23:32:35 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:00.472 23:32:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:00.472 23:32:35 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:00.472 23:32:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:00.472 23:32:35 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:00.472 23:32:35 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:00.472 23:32:35 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:00.472 23:32:35 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:00.472 23:32:35 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.472 23:32:35 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.472 23:32:35 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:00.472 23:32:35 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:00.472 23:32:35 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:00.472 [2024-07-15 23:32:35.370717] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
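The decompress case just logged above differs mainly in its input: -l points accel_perf at the pre-built test/accel/bib fixture, and -y is added by the harness to every decompress variant, presumably to verify the output. A sketch under the same local-build assumption as before:

    # Decompress the bundled test input for 1 second, with verification.
    ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y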
00:06:00.472 [2024-07-15 23:32:35.370785] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3674028 ] 00:06:00.472 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.472 [2024-07-15 23:32:35.428262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.472 [2024-07-15 23:32:35.541117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.729 23:32:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:00.730 23:32:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:00.730 23:32:35 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:00.730 23:32:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.730 23:32:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:00.730 23:32:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:00.730 23:32:35 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:00.730 23:32:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.730 23:32:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:00.730 23:32:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:00.730 23:32:35 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:00.730 23:32:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.730 23:32:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:00.730 23:32:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:00.730 23:32:35 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:00.730 23:32:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.730 23:32:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:00.730 23:32:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:00.730 23:32:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:00.730 23:32:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.730 23:32:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:00.730 23:32:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:00.730 23:32:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:00.730 23:32:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.730 23:32:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:00.730 23:32:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.099 23:32:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:02.099 23:32:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.099 23:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.099 23:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.099 23:32:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:02.099 23:32:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.099 23:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.099 23:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.099 23:32:36 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:02.099 23:32:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.099 23:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.099 23:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.099 23:32:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:02.099 23:32:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.099 23:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.099 23:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.099 23:32:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:02.099 23:32:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.099 23:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.099 23:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.099 23:32:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:02.099 23:32:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.099 23:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.099 23:32:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.099 23:32:36 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:02.099 23:32:36 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:02.099 23:32:36 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:02.099 00:06:02.099 real 0m1.436s 00:06:02.099 user 0m1.305s 00:06:02.099 sys 0m0.133s 00:06:02.099 23:32:36 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.099 23:32:36 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:02.099 ************************************ 00:06:02.099 END TEST accel_decomp 00:06:02.099 ************************************ 00:06:02.099 23:32:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:02.099 23:32:36 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:02.099 23:32:36 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:02.099 23:32:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.099 23:32:36 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.099 ************************************ 00:06:02.099 START TEST accel_decomp_full 00:06:02.099 ************************************ 00:06:02.100 23:32:36 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:02.100 23:32:36 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:02.100 23:32:36 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:02.100 23:32:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.100 23:32:36 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:02.100 23:32:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.100 23:32:36 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:02.100 23:32:36 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:02.100 23:32:36 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.100 23:32:36 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.100 23:32:36 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.100 23:32:36 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.100 23:32:36 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.100 23:32:36 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:02.100 23:32:36 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:02.100 [2024-07-15 23:32:36.856165] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:06:02.100 [2024-07-15 23:32:36.856223] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3674222 ] 00:06:02.100 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.100 [2024-07-15 23:32:36.914267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.100 [2024-07-15 23:32:37.023997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.100 23:32:37 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.100 23:32:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.469 23:32:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:03.469 23:32:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.469 23:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.469 23:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.469 23:32:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:03.469 23:32:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.469 23:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.469 23:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.469 23:32:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:03.469 23:32:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.469 23:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.469 23:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.469 23:32:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:03.469 23:32:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.469 23:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.469 23:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.469 23:32:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:03.469 23:32:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.469 23:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.469 23:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.469 23:32:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:03.469 23:32:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.469 23:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.469 23:32:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.469 23:32:38 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:03.469 23:32:38 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:03.469 23:32:38 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:03.469 00:06:03.469 real 0m1.448s 00:06:03.469 user 0m1.317s 00:06:03.469 sys 0m0.133s 00:06:03.469 23:32:38 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.469 23:32:38 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:03.469 ************************************ 00:06:03.469 END TEST accel_decomp_full 00:06:03.469 ************************************ 00:06:03.469 23:32:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:03.469 23:32:38 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:03.469 23:32:38 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:06:03.469 23:32:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.469 23:32:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:03.469 ************************************ 00:06:03.469 START TEST accel_decomp_mcore 00:06:03.469 ************************************ 00:06:03.469 23:32:38 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:03.469 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:03.469 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:03.469 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.469 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:03.469 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.469 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:03.469 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:03.469 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:03.469 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:03.469 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.469 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.469 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:03.469 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:03.469 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:03.470 [2024-07-15 23:32:38.355159] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
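The mcore variant invoked above adds only a wider core mask: -m 0xf schedules the run across cores 0-3, which is why the log below reports "Total cores available: 4" and four separate "Reactor started" notices instead of one. A sketch:

    # Same decompress workload, fanned out over four reactor cores.
    ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -m 0xf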
00:06:03.470 [2024-07-15 23:32:38.355223] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3674382 ] 00:06:03.470 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.470 [2024-07-15 23:32:38.412807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:03.470 [2024-07-15 23:32:38.517850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.470 [2024-07-15 23:32:38.518025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:03.470 [2024-07-15 23:32:38.517950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.470 [2024-07-15 23:32:38.518028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.470 23:32:38 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:03.470 23:32:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.841 23:32:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:04.841 23:32:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.841 23:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.841 23:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.841 23:32:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:04.841 23:32:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.841 23:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.841 23:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.841 23:32:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:04.841 23:32:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.841 23:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.841 23:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.841 23:32:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:04.841 23:32:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.841 23:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.841 23:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.841 23:32:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:04.841 23:32:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.841 23:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.841 23:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.841 23:32:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:04.841 23:32:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.841 23:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.841 23:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.841 23:32:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:04.841 23:32:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.841 23:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.841 23:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.841 23:32:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:04.841 23:32:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.841 23:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.841 23:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.841 23:32:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:04.841 23:32:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.841 23:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.841 23:32:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.841 23:32:39 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:04.841 23:32:39 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:04.841 23:32:39 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:04.841 00:06:04.841 real 0m1.451s 00:06:04.841 user 0m4.754s 00:06:04.841 sys 0m0.147s 00:06:04.841 23:32:39 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.841 23:32:39 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:04.841 ************************************ 00:06:04.841 END TEST accel_decomp_mcore 00:06:04.841 ************************************ 00:06:04.841 23:32:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:04.841 23:32:39 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:04.841 23:32:39 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:04.841 23:32:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.841 23:32:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:04.841 ************************************ 00:06:04.841 START TEST accel_decomp_full_mcore 00:06:04.841 ************************************ 00:06:04.841 23:32:39 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:04.841 23:32:39 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:04.841 23:32:39 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:04.841 23:32:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.841 23:32:39 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:04.841 23:32:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.841 23:32:39 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:04.841 23:32:39 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:04.841 23:32:39 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.841 23:32:39 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.841 23:32:39 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.841 23:32:39 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.841 23:32:39 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.841 23:32:39 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:04.841 23:32:39 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:04.841 [2024-07-15 23:32:39.853822] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
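The timing block above comes from the time wrapper around the whole test: with a 0xf core mask (four reactors) the user time of roughly 4.75 s is about four times the 1.45 s wall clock, which is what one second of decompress work per core should cost. A minimal sketch of the full_mcore invocation traced at accel.sh@12 here, assuming this job's workspace layout and using an empty JSON object on fd 62 as a stand-in for build_accel_config's real output:

    # Sketch only, not the harness itself: one second of software decompress on
    # four cores, with the accel config delivered over an anonymous fd exactly
    # as accel.sh@12 does above. The '{}' config is an assumption.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    time "$SPDK_DIR/build/examples/accel_perf" -c /dev/fd/62 \
        -t 1 -w decompress -l "$SPDK_DIR/test/accel/bib" \
        -y -o 0 -m 0xf 62< <(echo '{}')
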
00:06:04.841 [2024-07-15 23:32:39.853885] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3674543 ] 00:06:04.841 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.841 [2024-07-15 23:32:39.911965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:05.099 [2024-07-15 23:32:40.027071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.099 [2024-07-15 23:32:40.027136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.099 [2024-07-15 23:32:40.027203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:05.099 [2024-07-15 23:32:40.027206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:05.099 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.100 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.100 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.100 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:05.100 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.100 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.100 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.100 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:05.100 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.100 23:32:40 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:05.100 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.100 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:05.100 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.100 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.100 23:32:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.471 23:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.471 23:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.471 23:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.471 23:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.471 23:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.471 23:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.471 23:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.471 23:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.471 23:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.471 23:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.471 23:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.471 23:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.471 23:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.471 23:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.471 23:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.471 23:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.471 23:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.471 23:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.471 23:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.471 23:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.471 23:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.471 23:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.471 23:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.471 23:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.471 23:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.471 23:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.471 23:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.471 23:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.471 23:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.471 23:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.471 23:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.471 23:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.471 23:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.471 23:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.472 23:32:41 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:06.472 23:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.472 23:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:06.472 23:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:06.472 23:32:41 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:06.472 00:06:06.472 real 0m1.466s 00:06:06.472 user 0m4.807s 00:06:06.472 sys 0m0.139s 00:06:06.472 23:32:41 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.472 23:32:41 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:06.472 ************************************ 00:06:06.472 END TEST accel_decomp_full_mcore 00:06:06.472 ************************************ 00:06:06.472 23:32:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:06.472 23:32:41 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:06.472 23:32:41 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:06.472 23:32:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.472 23:32:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.472 ************************************ 00:06:06.472 START TEST accel_decomp_mthread 00:06:06.472 ************************************ 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:06.472 [2024-07-15 23:32:41.368280] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
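Nearly every other line of this trace is one tiny loop: accel.sh@19-23 reads accel_perf's key:value summary, and the case at @21 captures the opcode and module that the [[ -n software ]] / [[ -n decompress ]] assertions at accel.sh@27 later check. A loose reconstruction, hedged because only fragments of the script are visible in the xtrace (the exact case patterns and the trimming step are assumptions):

    # Loose reconstruction of the accel.sh@19-23 loop behind the val=/case/read
    # lines above; fed canned input here so the sketch runs standalone.
    while IFS=: read -r var val; do
        case "$var" in
            *opcode*) accel_opc=${val# } ;;    # -> decompress
            *module*) accel_module=${val# } ;; # -> software
        esac
    done < <(printf '%s\n' 'opcode: decompress' 'module: software')
    [[ -n $accel_module && -n $accel_opc ]]    # what accel.sh@27 asserts
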
00:06:06.472 [2024-07-15 23:32:41.368346] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3674819 ] 00:06:06.472 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.472 [2024-07-15 23:32:41.427665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.472 [2024-07-15 23:32:41.526201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:06.472 23:32:41 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.472 23:32:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.844 23:32:42 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:06:07.844 23:32:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.844 23:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.844 23:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.844 23:32:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.844 23:32:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.844 23:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.844 23:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.844 23:32:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.844 23:32:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.844 23:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.844 23:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.844 23:32:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.844 23:32:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.844 23:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.844 23:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.844 23:32:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.844 23:32:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.844 23:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.844 23:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.844 23:32:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.844 23:32:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.844 23:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.844 23:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.844 23:32:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.844 23:32:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.844 23:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.844 23:32:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.844 23:32:42 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:07.844 23:32:42 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:07.844 23:32:42 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:07.845 00:06:07.845 real 0m1.421s 00:06:07.845 user 0m1.295s 00:06:07.845 sys 0m0.128s 00:06:07.845 23:32:42 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.845 23:32:42 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:07.845 ************************************ 00:06:07.845 END TEST accel_decomp_mthread 00:06:07.845 ************************************ 00:06:07.845 23:32:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:07.845 23:32:42 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:07.845 23:32:42 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:07.845 23:32:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.845 23:32:42 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:07.845 ************************************ 00:06:07.845 START TEST accel_decomp_full_mthread 00:06:07.845 ************************************ 00:06:07.845 23:32:42 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:07.845 23:32:42 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:07.845 23:32:42 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:07.845 23:32:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.845 23:32:42 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:07.845 23:32:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.845 23:32:42 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:07.845 23:32:42 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:07.845 23:32:42 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.845 23:32:42 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.845 23:32:42 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.845 23:32:42 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.845 23:32:42 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.845 23:32:42 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:07.845 23:32:42 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:07.845 [2024-07-15 23:32:42.834519] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
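Relative to the mcore runs, the mthread variants trade the core mask for -T (worker threads), and the full variants add -o 0, which, judging by the '111250 bytes' versus '4096 bytes' values in the traces, switches accel.sh from 4 KiB blocks to the whole bib payload. The same sketch as before, adjusted for this test (paths and the empty fd-62 config remain assumptions):

    # Sketch: single core, two worker threads, full 111250-byte payload.
    "$SPDK_DIR/build/examples/accel_perf" -c /dev/fd/62 \
        -t 1 -w decompress -l "$SPDK_DIR/test/accel/bib" \
        -y -o 0 -T 2 62< <(echo '{}')
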
00:06:07.845 [2024-07-15 23:32:42.834580] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3674975 ] 00:06:07.845 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.845 [2024-07-15 23:32:42.892218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.104 [2024-07-15 23:32:42.997629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.104 23:32:43 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.104 23:32:43 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.104 23:32:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.474 23:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:09.474 23:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.474 23:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.474 23:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.474 23:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:09.474 23:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.474 23:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.474 23:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.474 23:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:09.474 23:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.474 23:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.474 23:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.474 23:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:09.474 23:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.474 23:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.474 23:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.474 23:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:09.474 23:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.474 23:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.474 23:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.474 23:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:09.474 23:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.474 23:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.474 23:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.474 23:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:09.474 23:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.474 23:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.474 23:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.474 23:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:09.474 23:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:09.474 23:32:44 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.474 00:06:09.474 real 0m1.459s 00:06:09.474 user 0m1.327s 00:06:09.474 sys 0m0.134s 00:06:09.474 23:32:44 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.474 23:32:44 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:09.474 ************************************ 00:06:09.474 END 
TEST accel_decomp_full_mthread 00:06:09.474 ************************************ 00:06:09.474 23:32:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:09.474 23:32:44 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:09.474 23:32:44 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:09.474 23:32:44 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:09.474 23:32:44 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:09.474 23:32:44 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.474 23:32:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.474 23:32:44 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.474 23:32:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.474 23:32:44 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.474 23:32:44 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.474 23:32:44 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.474 23:32:44 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:09.474 23:32:44 accel -- accel/accel.sh@41 -- # jq -r . 00:06:09.474 ************************************ 00:06:09.474 START TEST accel_dif_functional_tests 00:06:09.474 ************************************ 00:06:09.474 23:32:44 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:09.474 [2024-07-15 23:32:44.366001] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:06:09.474 [2024-07-15 23:32:44.366063] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3675135 ] 00:06:09.474 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.474 [2024-07-15 23:32:44.421526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:09.474 [2024-07-15 23:32:44.526617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.474 [2024-07-15 23:32:44.526679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.474 [2024-07-15 23:32:44.526682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.732 00:06:09.732 00:06:09.732 CUnit - A unit testing framework for C - Version 2.1-3 00:06:09.732 http://cunit.sourceforge.net/ 00:06:09.732 00:06:09.732 00:06:09.732 Suite: accel_dif 00:06:09.732 Test: verify: DIF generated, GUARD check ...passed 00:06:09.732 Test: verify: DIF generated, APPTAG check ...passed 00:06:09.732 Test: verify: DIF generated, REFTAG check ...passed 00:06:09.732 Test: verify: DIF not generated, GUARD check ...[2024-07-15 23:32:44.622733] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:09.732 passed 00:06:09.732 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 23:32:44.622818] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:09.732 passed 00:06:09.732 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 23:32:44.622851] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:09.732 passed 00:06:09.732 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:09.732 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 
23:32:44.622922] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:09.732 passed 00:06:09.732 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:09.732 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:09.732 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:09.732 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 23:32:44.623083] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:09.732 passed 00:06:09.732 Test: verify copy: DIF generated, GUARD check ...passed 00:06:09.732 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:09.732 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:09.732 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 23:32:44.623259] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:09.732 passed 00:06:09.732 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 23:32:44.623296] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:09.732 passed 00:06:09.732 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 23:32:44.623329] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:09.732 passed 00:06:09.732 Test: generate copy: DIF generated, GUARD check ...passed 00:06:09.732 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:09.732 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:09.732 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:09.732 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:09.732 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:09.732 Test: generate copy: iovecs-len validate ...[2024-07-15 23:32:44.623546] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:09.732 passed 00:06:09.732 Test: generate copy: buffer alignment validate ...passed 00:06:09.732 00:06:09.732 Run Summary: Type Total Ran Passed Failed Inactive 00:06:09.732 suites 1 1 n/a 0 0 00:06:09.732 tests 26 26 26 0 0 00:06:09.732 asserts 115 115 115 0 n/a 00:06:09.732 00:06:09.732 Elapsed time = 0.003 seconds 00:06:09.996 00:06:09.996 real 0m0.532s 00:06:09.996 user 0m0.810s 00:06:09.996 sys 0m0.174s 00:06:09.996 23:32:44 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.996 23:32:44 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:09.996 ************************************ 00:06:09.996 END TEST accel_dif_functional_tests 00:06:09.996 ************************************ 00:06:09.996 23:32:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:09.996 00:06:09.996 real 0m32.455s 00:06:09.996 user 0m36.122s 00:06:09.996 sys 0m4.319s 00:06:09.996 23:32:44 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.996 23:32:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.996 ************************************ 00:06:09.996 END TEST accel 00:06:09.996 ************************************ 00:06:09.996 23:32:44 -- common/autotest_common.sh@1142 -- # return 0 00:06:09.996 23:32:44 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:09.996 23:32:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:09.996 23:32:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.996 23:32:44 -- common/autotest_common.sh@10 -- # set +x 00:06:09.996 ************************************ 00:06:09.996 START TEST accel_rpc 00:06:09.996 ************************************ 00:06:09.996 23:32:44 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:09.996 * Looking for test storage... 00:06:09.996 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:09.996 23:32:44 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:09.996 23:32:44 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3675320 00:06:09.996 23:32:44 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:09.996 23:32:44 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3675320 00:06:09.996 23:32:44 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 3675320 ']' 00:06:09.996 23:32:44 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.996 23:32:44 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.996 23:32:44 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.996 23:32:44 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.996 23:32:44 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.996 [2024-07-15 23:32:45.022312] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
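The DIF suite that just wrapped up (26 tests, 115 asserts, all green) probes each field of the standard 8-byte protection tuple in turn: the 16-bit Guard CRC, the 16-bit Application Tag, and the 32-bit Reference Tag. The iovecs-len case is the deliberate negative test, with spdk_dif_generate_copy rejecting bounce buffers misaligned with the block size. The suite is a standalone CUnit binary, so it can be rerun in isolation under the same config-over-fd convention (the empty config is again an assumption):

    # Sketch: rerun only the DIF functional tests outside the harness.
    "$SPDK_DIR/test/accel/dif/dif" -c /dev/fd/62 62< <(echo '{}')
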
00:06:09.996 [2024-07-15 23:32:45.022412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3675320 ] 00:06:09.996 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.996 [2024-07-15 23:32:45.078498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.294 [2024-07-15 23:32:45.186566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.294 23:32:45 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.294 23:32:45 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:10.294 23:32:45 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:10.294 23:32:45 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:10.294 23:32:45 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:10.294 23:32:45 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:10.294 23:32:45 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:10.294 23:32:45 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:10.294 23:32:45 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.294 23:32:45 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.294 ************************************ 00:06:10.294 START TEST accel_assign_opcode 00:06:10.294 ************************************ 00:06:10.294 23:32:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:10.294 23:32:45 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:10.294 23:32:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.294 23:32:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:10.294 [2024-07-15 23:32:45.291272] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:10.294 23:32:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.294 23:32:45 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:10.294 23:32:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.294 23:32:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:10.294 [2024-07-15 23:32:45.299281] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:10.294 23:32:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.294 23:32:45 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:10.294 23:32:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.294 23:32:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:10.553 23:32:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.553 23:32:45 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:10.553 23:32:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.553 23:32:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 
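accel_assign_opcode is pure JSON-RPC choreography: the target is held at --wait-for-rpc so opcodes can still be reassigned, copy is pinned first to a bogus module and then to software, and only then is framework_start_init issued. A hand-driven sketch of the same flow, assuming this job's paths and substituting a crude sleep for the harness's waitforlisten:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC=$SPDK_DIR/scripts/rpc.py
    "$SPDK_DIR/build/bin/spdk_tgt" --wait-for-rpc &  # hold before framework init
    sleep 2                                          # stand-in for waitforlisten
    "$RPC" accel_assign_opc -o copy -m software      # pin the copy opcode
    "$RPC" framework_start_init                      # now let init finish
    "$RPC" accel_get_opc_assignments | jq -r .copy   # prints: software
    kill %1
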
00:06:10.553 23:32:45 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:10.553 23:32:45 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:10.553 23:32:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.553 software 00:06:10.553 00:06:10.553 real 0m0.286s 00:06:10.553 user 0m0.042s 00:06:10.553 sys 0m0.006s 00:06:10.553 23:32:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.553 23:32:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:10.553 ************************************ 00:06:10.553 END TEST accel_assign_opcode 00:06:10.553 ************************************ 00:06:10.553 23:32:45 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:10.553 23:32:45 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3675320 00:06:10.553 23:32:45 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 3675320 ']' 00:06:10.553 23:32:45 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 3675320 00:06:10.553 23:32:45 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:10.553 23:32:45 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:10.553 23:32:45 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3675320 00:06:10.553 23:32:45 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:10.553 23:32:45 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:10.553 23:32:45 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3675320' 00:06:10.553 killing process with pid 3675320 00:06:10.553 23:32:45 accel_rpc -- common/autotest_common.sh@967 -- # kill 3675320 00:06:10.553 23:32:45 accel_rpc -- common/autotest_common.sh@972 -- # wait 3675320 00:06:11.119 00:06:11.119 real 0m1.119s 00:06:11.119 user 0m1.116s 00:06:11.119 sys 0m0.401s 00:06:11.119 23:32:46 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.119 23:32:46 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.119 ************************************ 00:06:11.119 END TEST accel_rpc 00:06:11.119 ************************************ 00:06:11.119 23:32:46 -- common/autotest_common.sh@1142 -- # return 0 00:06:11.119 23:32:46 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:11.119 23:32:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:11.119 23:32:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.119 23:32:46 -- common/autotest_common.sh@10 -- # set +x 00:06:11.119 ************************************ 00:06:11.119 START TEST app_cmdline 00:06:11.119 ************************************ 00:06:11.119 23:32:46 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:11.119 * Looking for test storage... 
00:06:11.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:11.120 23:32:46 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:11.120 23:32:46 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3675526 00:06:11.120 23:32:46 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:11.120 23:32:46 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3675526 00:06:11.120 23:32:46 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 3675526 ']' 00:06:11.120 23:32:46 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.120 23:32:46 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.120 23:32:46 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.120 23:32:46 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.120 23:32:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:11.120 [2024-07-15 23:32:46.197836] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:06:11.120 [2024-07-15 23:32:46.197935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3675526 ] 00:06:11.120 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.377 [2024-07-15 23:32:46.257636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.378 [2024-07-15 23:32:46.362088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.635 23:32:46 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.635 23:32:46 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:11.635 23:32:46 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:11.892 { 00:06:11.892 "version": "SPDK v24.09-pre git sha1 1053f1b13", 00:06:11.892 "fields": { 00:06:11.892 "major": 24, 00:06:11.892 "minor": 9, 00:06:11.892 "patch": 0, 00:06:11.892 "suffix": "-pre", 00:06:11.892 "commit": "1053f1b13" 00:06:11.892 } 00:06:11.892 } 00:06:11.892 23:32:46 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:11.892 23:32:46 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:11.892 23:32:46 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:11.892 23:32:46 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:11.892 23:32:46 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:11.892 23:32:46 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.892 23:32:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:11.892 23:32:46 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:11.892 23:32:46 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:11.892 23:32:46 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.892 23:32:46 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:11.892 23:32:46 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:11.892 23:32:46 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:11.892 23:32:46 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:11.892 23:32:46 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:11.892 23:32:46 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:11.892 23:32:46 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:11.893 23:32:46 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:11.893 23:32:46 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:11.893 23:32:46 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:11.893 23:32:46 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:11.893 23:32:46 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:11.893 23:32:46 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:11.893 23:32:46 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:12.150 request: 00:06:12.150 { 00:06:12.150 "method": "env_dpdk_get_mem_stats", 00:06:12.150 "req_id": 1 00:06:12.150 } 00:06:12.150 Got JSON-RPC error response 00:06:12.150 response: 00:06:12.150 { 00:06:12.150 "code": -32601, 00:06:12.150 "message": "Method not found" 00:06:12.150 } 00:06:12.150 23:32:47 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:12.150 23:32:47 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:12.150 23:32:47 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:12.150 23:32:47 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:12.150 23:32:47 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3675526 00:06:12.150 23:32:47 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 3675526 ']' 00:06:12.150 23:32:47 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 3675526 00:06:12.150 23:32:47 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:12.150 23:32:47 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:12.150 23:32:47 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3675526 00:06:12.150 23:32:47 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:12.150 23:32:47 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:12.150 23:32:47 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3675526' 00:06:12.150 killing process with pid 3675526 00:06:12.150 23:32:47 app_cmdline -- common/autotest_common.sh@967 -- # kill 3675526 00:06:12.150 23:32:47 app_cmdline -- common/autotest_common.sh@972 -- # wait 3675526 00:06:12.716 00:06:12.716 real 0m1.510s 00:06:12.716 user 0m1.821s 00:06:12.716 sys 0m0.467s 00:06:12.716 23:32:47 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
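The app_cmdline run above exercises the target's RPC allowlist: spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, the test confirms that exactly those two methods are reported back, and a call to any method outside the list (env_dpdk_get_mem_stats) is expected to fail with JSON-RPC error -32601 "Method not found". A minimal sketch of the same checks, assuming an SPDK checkout with rpc.py talking to the default /var/tmp/spdk.sock socket (paths are illustrative):

  # expose only two methods on the target
  ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &

  ./scripts/rpc.py spdk_get_version         # allowed: returns the version object traced above
  ./scripts/rpc.py rpc_get_methods          # allowed: should list exactly the two methods
  ./scripts/rpc.py env_dpdk_get_mem_stats \
      || echo 'rejected with -32601, as the test expects'   # not in the allowlist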
00:06:12.716 23:32:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:12.716 ************************************ 00:06:12.716 END TEST app_cmdline 00:06:12.716 ************************************ 00:06:12.716 23:32:47 -- common/autotest_common.sh@1142 -- # return 0 00:06:12.716 23:32:47 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:12.716 23:32:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:12.716 23:32:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.716 23:32:47 -- common/autotest_common.sh@10 -- # set +x 00:06:12.716 ************************************ 00:06:12.716 START TEST version 00:06:12.716 ************************************ 00:06:12.716 23:32:47 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:12.716 * Looking for test storage... 00:06:12.716 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:12.716 23:32:47 version -- app/version.sh@17 -- # get_header_version major 00:06:12.716 23:32:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:12.716 23:32:47 version -- app/version.sh@14 -- # cut -f2 00:06:12.716 23:32:47 version -- app/version.sh@14 -- # tr -d '"' 00:06:12.716 23:32:47 version -- app/version.sh@17 -- # major=24 00:06:12.716 23:32:47 version -- app/version.sh@18 -- # get_header_version minor 00:06:12.716 23:32:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:12.716 23:32:47 version -- app/version.sh@14 -- # cut -f2 00:06:12.716 23:32:47 version -- app/version.sh@14 -- # tr -d '"' 00:06:12.716 23:32:47 version -- app/version.sh@18 -- # minor=9 00:06:12.716 23:32:47 version -- app/version.sh@19 -- # get_header_version patch 00:06:12.716 23:32:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:12.716 23:32:47 version -- app/version.sh@14 -- # cut -f2 00:06:12.716 23:32:47 version -- app/version.sh@14 -- # tr -d '"' 00:06:12.716 23:32:47 version -- app/version.sh@19 -- # patch=0 00:06:12.716 23:32:47 version -- app/version.sh@20 -- # get_header_version suffix 00:06:12.716 23:32:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:12.716 23:32:47 version -- app/version.sh@14 -- # cut -f2 00:06:12.716 23:32:47 version -- app/version.sh@14 -- # tr -d '"' 00:06:12.716 23:32:47 version -- app/version.sh@20 -- # suffix=-pre 00:06:12.716 23:32:47 version -- app/version.sh@22 -- # version=24.9 00:06:12.716 23:32:47 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:12.716 23:32:47 version -- app/version.sh@28 -- # version=24.9rc0 00:06:12.716 23:32:47 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:12.716 23:32:47 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:06:12.716 23:32:47 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:12.716 23:32:47 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:12.716 00:06:12.716 real 0m0.114s 00:06:12.716 user 0m0.055s 00:06:12.716 sys 0m0.082s 00:06:12.716 23:32:47 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.716 23:32:47 version -- common/autotest_common.sh@10 -- # set +x 00:06:12.716 ************************************ 00:06:12.716 END TEST version 00:06:12.716 ************************************ 00:06:12.716 23:32:47 -- common/autotest_common.sh@1142 -- # return 0 00:06:12.716 23:32:47 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:12.716 23:32:47 -- spdk/autotest.sh@198 -- # uname -s 00:06:12.716 23:32:47 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:12.716 23:32:47 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:12.716 23:32:47 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:12.716 23:32:47 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:12.716 23:32:47 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:12.716 23:32:47 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:12.716 23:32:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:12.716 23:32:47 -- common/autotest_common.sh@10 -- # set +x 00:06:12.716 23:32:47 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:12.716 23:32:47 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:12.716 23:32:47 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:12.716 23:32:47 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:12.716 23:32:47 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:12.716 23:32:47 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:12.716 23:32:47 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:12.716 23:32:47 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:12.716 23:32:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.716 23:32:47 -- common/autotest_common.sh@10 -- # set +x 00:06:12.716 ************************************ 00:06:12.716 START TEST nvmf_tcp 00:06:12.716 ************************************ 00:06:12.716 23:32:47 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:12.975 * Looking for test storage... 00:06:12.975 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:12.975 23:32:47 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:12.975 23:32:47 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:12.975 23:32:47 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:12.975 23:32:47 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:12.975 23:32:47 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:12.975 23:32:47 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:12.975 23:32:47 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:12.975 23:32:47 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:12.975 23:32:47 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:12.975 23:32:47 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:12.975 23:32:47 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:12.975 23:32:47 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:12.975 23:32:47 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:12.975 23:32:47 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:12.975 23:32:47 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:12.975 23:32:47 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:12.975 23:32:47 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:12.975 23:32:47 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:12.975 23:32:47 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:12.975 23:32:47 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:12.975 23:32:47 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:12.975 23:32:47 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:12.975 23:32:47 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:12.975 23:32:47 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:12.975 23:32:47 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.975 23:32:47 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.975 23:32:47 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.975 23:32:47 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:12.975 23:32:47 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.975 23:32:47 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:12.975 23:32:47 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:12.975 23:32:47 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:12.975 23:32:47 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:12.975 23:32:47 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:12.975 23:32:47 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:12.975 23:32:47 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:12.975 23:32:47 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:12.975 23:32:47 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:12.975 23:32:47 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:12.975 23:32:47 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:12.975 23:32:47 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:12.975 23:32:47 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:12.975 23:32:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:12.975 23:32:47 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:12.975 23:32:47 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:12.975 23:32:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:12.975 23:32:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.975 23:32:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:12.975 ************************************ 00:06:12.975 START TEST nvmf_example 00:06:12.975 ************************************ 00:06:12.975 23:32:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:12.975 * Looking for test storage... 
00:06:12.975 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:12.975 23:32:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:12.975 23:32:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:12.975 23:32:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:12.975 23:32:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:12.975 23:32:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:12.975 23:32:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:12.975 23:32:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:12.975 23:32:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:12.975 23:32:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:12.975 23:32:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:12.975 23:32:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:12.975 23:32:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:12.975 23:32:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:12.975 23:32:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:12.975 23:32:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:12.975 23:32:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:12.975 23:32:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:12.975 23:32:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:12.975 23:32:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:12.975 23:32:47 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:12.975 23:32:47 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:12.975 23:32:47 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:12.975 23:32:47 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.976 23:32:47 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.976 23:32:47 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.976 23:32:47 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:12.976 23:32:47 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.976 23:32:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:12.976 23:32:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:12.976 23:32:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:12.976 23:32:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:12.976 23:32:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:12.976 23:32:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:12.976 23:32:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:12.976 23:32:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:12.976 23:32:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:12.976 23:32:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:12.976 23:32:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:12.976 23:32:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:12.976 23:32:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:12.976 23:32:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:12.976 23:32:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:12.976 23:32:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:12.976 23:32:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:12.976 23:32:47 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:06:12.976 23:32:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:12.976 23:32:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:12.976 23:32:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:12.976 23:32:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:12.976 23:32:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:12.976 23:32:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:12.976 23:32:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:12.976 23:32:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:12.976 23:32:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:12.976 23:32:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:12.976 23:32:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:12.976 23:32:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:12.976 23:32:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:12.976 23:32:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:15.508 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:15.508 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:15.508 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:15.508 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:15.508 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:15.508 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:15.508 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:15.508 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:15.508 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:15.508 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:15.508 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:15.508 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:15.508 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:15.508 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:15.508 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:15.508 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:15.508 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:15.508 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:15.508 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:15.508 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:15.508 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:15.508 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:15.508 23:32:50 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:15.508 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:15.508 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:15.508 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:15.508 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:15.508 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:15.508 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:15.508 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:15.508 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:15.508 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:15.508 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:15.508 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:15.508 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:15.508 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:15.508 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:15.508 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:15.508 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:15.508 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:15.508 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:15.508 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:15.508 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:15.509 Found net devices under 
0000:09:00.0: cvl_0_0 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:15.509 Found net devices under 0000:09:00.1: cvl_0_1 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:15.509 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:15.509 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:06:15.509 00:06:15.509 --- 10.0.0.2 ping statistics --- 00:06:15.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:15.509 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:15.509 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:15.509 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:06:15.509 00:06:15.509 --- 10.0.0.1 ping statistics --- 00:06:15.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:15.509 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3677546 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3677546 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 3677546 ']' 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
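The nvmftestinit trace above has just finished building the two-port test topology used for the rest of the TCP runs: the first e810 netdev (cvl_0_0) is moved into a fresh network namespace and addressed as the target at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened in iptables, and both directions are verified with a single ping each. A condensed sketch of the same setup, assuming two cabled interfaces with these names and root privileges:

  ip netns add cvl_0_0_ns_spdk                        # namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

Keeping the target port in its own namespace forces all traffic across the physical link rather than the kernel loopback, which is why the nvmf target itself is launched under ip netns exec cvl_0_0_ns_spdk.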
00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.509 23:32:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:15.509 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.443 23:32:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.443 23:32:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:06:16.443 23:32:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:16.443 23:32:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:16.443 23:32:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:16.443 23:32:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:16.443 23:32:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.443 23:32:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:16.443 23:32:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.443 23:32:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:16.443 23:32:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.443 23:32:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:16.443 23:32:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.443 23:32:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:16.443 23:32:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:16.443 23:32:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.443 23:32:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:16.443 23:32:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.443 23:32:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:16.443 23:32:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:16.443 23:32:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.443 23:32:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:16.443 23:32:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.443 23:32:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:16.443 23:32:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.443 23:32:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:16.443 23:32:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.443 23:32:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:16.444 23:32:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:16.444 EAL: No free 2048 kB hugepages reported on node 1 
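With the target listening, everything is provisioned over JSON-RPC before load is applied: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks (Malloc0), subsystem nqn.2016-06.io.spdk:cnode1 carrying that bdev as namespace 1, and a TCP listener on 10.0.0.2:4420. spdk_nvme_perf then connects as an initiator and drives queue depth 64, 4 KiB random I/O at a 30/70 read/write mix for 10 seconds; its summary table follows below. The equivalent sequence against a running target, assuming rpc.py on the default socket (paths are illustrative):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # transport options as traced above
  ./scripts/rpc.py bdev_malloc_create 64 512                 # 64 MiB bdev, 512 B blocks -> Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # 10 s of QD64 4 KiB random I/O, 30% reads, against the new subsystem
  ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'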
00:06:28.642 Initializing NVMe Controllers 00:06:28.642 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:28.642 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:28.642 Initialization complete. Launching workers. 00:06:28.642 ======================================================== 00:06:28.642 Latency(us) 00:06:28.642 Device Information : IOPS MiB/s Average min max 00:06:28.642 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14950.57 58.40 4280.82 884.62 16538.65 00:06:28.642 ======================================================== 00:06:28.642 Total : 14950.57 58.40 4280.82 884.62 16538.65 00:06:28.642 00:06:28.642 23:33:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:28.642 23:33:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:28.642 23:33:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:28.642 23:33:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:06:28.642 23:33:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:28.642 23:33:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:06:28.642 23:33:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:28.642 23:33:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:28.642 rmmod nvme_tcp 00:06:28.642 rmmod nvme_fabrics 00:06:28.642 rmmod nvme_keyring 00:06:28.642 23:33:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:28.642 23:33:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:06:28.642 23:33:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:06:28.642 23:33:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3677546 ']' 00:06:28.642 23:33:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3677546 00:06:28.642 23:33:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 3677546 ']' 00:06:28.642 23:33:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 3677546 00:06:28.642 23:33:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:06:28.642 23:33:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:28.642 23:33:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3677546 00:06:28.642 23:33:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:06:28.642 23:33:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:06:28.643 23:33:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3677546' 00:06:28.643 killing process with pid 3677546 00:06:28.643 23:33:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 3677546 00:06:28.643 23:33:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 3677546 00:06:28.643 nvmf threads initialize successfully 00:06:28.643 bdev subsystem init successfully 00:06:28.643 created a nvmf target service 00:06:28.643 create targets's poll groups done 00:06:28.643 all subsystems of target started 00:06:28.643 nvmf target is running 00:06:28.643 all subsystems of target stopped 00:06:28.643 destroy targets's poll groups done 00:06:28.643 destroyed the nvmf target service 00:06:28.643 bdev subsystem finish successfully 00:06:28.643 nvmf threads destroy successfully 00:06:28.643 23:33:01 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:28.643 23:33:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:28.643 23:33:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:28.643 23:33:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:28.643 23:33:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:28.643 23:33:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:28.643 23:33:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:28.643 23:33:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:29.213 23:33:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:29.213 23:33:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:29.213 23:33:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:29.213 23:33:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:29.213 00:06:29.213 real 0m16.124s 00:06:29.213 user 0m45.291s 00:06:29.213 sys 0m3.454s 00:06:29.213 23:33:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.213 23:33:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:29.213 ************************************ 00:06:29.213 END TEST nvmf_example 00:06:29.213 ************************************ 00:06:29.213 23:33:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:29.213 23:33:04 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:29.213 23:33:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:29.213 23:33:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.213 23:33:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:29.213 ************************************ 00:06:29.213 START TEST nvmf_filesystem 00:06:29.213 ************************************ 00:06:29.213 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:29.213 * Looking for test storage... 
00:06:29.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:29.213 23:33:04 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:29.213 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:29.213 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:29.213 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:29.213 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:29.213 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:29.213 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:29.213 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:29.213 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:29.213 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:29.213 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:29.213 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:29.213 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:29.213 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:29.213 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:29.213 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:29.213 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:29.213 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:29.213 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:29.213 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:29.213 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:29.214 23:33:04 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:29.214 23:33:04 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:29.214 #define SPDK_CONFIG_H 00:06:29.214 #define SPDK_CONFIG_APPS 1 00:06:29.214 #define SPDK_CONFIG_ARCH native 00:06:29.214 #undef SPDK_CONFIG_ASAN 00:06:29.214 #undef SPDK_CONFIG_AVAHI 00:06:29.214 #undef SPDK_CONFIG_CET 00:06:29.214 #define SPDK_CONFIG_COVERAGE 1 00:06:29.214 #define SPDK_CONFIG_CROSS_PREFIX 00:06:29.214 #undef SPDK_CONFIG_CRYPTO 00:06:29.214 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:29.214 #undef SPDK_CONFIG_CUSTOMOCF 00:06:29.214 #undef SPDK_CONFIG_DAOS 00:06:29.214 #define SPDK_CONFIG_DAOS_DIR 00:06:29.214 #define SPDK_CONFIG_DEBUG 1 00:06:29.214 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:29.214 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:29.214 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:29.214 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:29.214 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:29.214 #undef SPDK_CONFIG_DPDK_UADK 00:06:29.214 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:29.214 #define SPDK_CONFIG_EXAMPLES 1 00:06:29.214 #undef SPDK_CONFIG_FC 00:06:29.214 #define SPDK_CONFIG_FC_PATH 00:06:29.214 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:29.214 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:29.214 #undef SPDK_CONFIG_FUSE 00:06:29.214 #undef SPDK_CONFIG_FUZZER 00:06:29.214 #define SPDK_CONFIG_FUZZER_LIB 00:06:29.214 #undef SPDK_CONFIG_GOLANG 00:06:29.214 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:29.214 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:29.214 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:29.214 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:29.214 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:29.214 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:29.214 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:29.214 #define SPDK_CONFIG_IDXD 1 00:06:29.214 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:29.214 #undef SPDK_CONFIG_IPSEC_MB 00:06:29.214 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:29.214 #define SPDK_CONFIG_ISAL 1 00:06:29.214 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:29.214 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:29.214 #define SPDK_CONFIG_LIBDIR 00:06:29.214 #undef SPDK_CONFIG_LTO 00:06:29.215 #define SPDK_CONFIG_MAX_LCORES 128 00:06:29.215 #define SPDK_CONFIG_NVME_CUSE 1 00:06:29.215 #undef SPDK_CONFIG_OCF 00:06:29.215 #define SPDK_CONFIG_OCF_PATH 00:06:29.215 #define 
SPDK_CONFIG_OPENSSL_PATH 00:06:29.215 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:29.215 #define SPDK_CONFIG_PGO_DIR 00:06:29.215 #undef SPDK_CONFIG_PGO_USE 00:06:29.215 #define SPDK_CONFIG_PREFIX /usr/local 00:06:29.215 #undef SPDK_CONFIG_RAID5F 00:06:29.215 #undef SPDK_CONFIG_RBD 00:06:29.215 #define SPDK_CONFIG_RDMA 1 00:06:29.215 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:29.215 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:29.215 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:29.215 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:29.215 #define SPDK_CONFIG_SHARED 1 00:06:29.215 #undef SPDK_CONFIG_SMA 00:06:29.215 #define SPDK_CONFIG_TESTS 1 00:06:29.215 #undef SPDK_CONFIG_TSAN 00:06:29.215 #define SPDK_CONFIG_UBLK 1 00:06:29.215 #define SPDK_CONFIG_UBSAN 1 00:06:29.215 #undef SPDK_CONFIG_UNIT_TESTS 00:06:29.215 #undef SPDK_CONFIG_URING 00:06:29.215 #define SPDK_CONFIG_URING_PATH 00:06:29.215 #undef SPDK_CONFIG_URING_ZNS 00:06:29.215 #undef SPDK_CONFIG_USDT 00:06:29.215 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:29.215 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:29.215 #define SPDK_CONFIG_VFIO_USER 1 00:06:29.215 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:29.215 #define SPDK_CONFIG_VHOST 1 00:06:29.215 #define SPDK_CONFIG_VIRTIO 1 00:06:29.215 #undef SPDK_CONFIG_VTUNE 00:06:29.215 #define SPDK_CONFIG_VTUNE_DIR 00:06:29.215 #define SPDK_CONFIG_WERROR 1 00:06:29.215 #define SPDK_CONFIG_WPDK_DIR 00:06:29.215 #undef SPDK_CONFIG_XNVME 00:06:29.215 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:29.215 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:06:29.216 23:33:04 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:29.216 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
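The long runs of ': 0' / ': 1' traces paired with 'export SPDK_TEST_*' lines above come from autotest_common.sh assigning every test flag a default before exporting it. A minimal sketch of that idiom, using a hypothetical flag name (SPDK_TEST_EXAMPLE is illustrative, not a real flag):

  # Keep any value already set in the environment (e.g. via autorun-spdk.conf);
  # otherwise fall back to 0. The ':' builtin discards the expanded value.
  : "${SPDK_TEST_EXAMPLE:=0}"
  export SPDK_TEST_EXAMPLE
  # Under 'set -x' this traces as ': 0' followed by 'export SPDK_TEST_EXAMPLE',
  # matching the pairs of lines in the log.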
00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 3679363 ]] 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 3679363 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.pisdJO 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.pisdJO/tests/target /tmp/spdk.pisdJO 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=952066048 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4332363776 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=51419332608 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994725376 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10575392768 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30992650240 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997360640 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12390182912 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398948352 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8765440 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30996873216 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997364736 00:06:29.217 23:33:04 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=491520 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199468032 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199472128 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:29.217 * Looking for test storage... 00:06:29.217 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=51419332608 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=12789985280 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:29.218 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:06:29.218 23:33:04 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
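The set_test_storage pass above reads 'df -T' into the mounts/fss/sizes/avails/uses arrays, walks the storage candidates, and exports SPDK_TEST_STORAGE once a filesystem with enough headroom is found. A condensed sketch of the same idea, assuming GNU df; the 2 GiB figure and directory name are illustrative, not the mktemp path from the log:

  requested_size=$((2 * 1024 * 1024 * 1024))   # illustrative: 2 GiB
  target_dir=/tmp/spdk_test_scratch            # hypothetical scratch dir
  mkdir -p "$target_dir"
  # Free bytes on the filesystem backing target_dir.
  avail=$(df --output=avail -B1 "$target_dir" | tail -n 1)
  if (( avail >= requested_size )); then
      export SPDK_TEST_STORAGE="$target_dir"
      printf '* Found test storage at %s\n' "$SPDK_TEST_STORAGE"
  fi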
00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:29.218 23:33:04 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:29.218 23:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:29.219 23:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:29.219 23:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:29.219 23:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:29.219 23:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:29.219 23:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:29.219 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:29.219 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:29.219 23:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:29.219 23:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:29.219 23:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:06:29.219 23:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:31.751 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:31.751 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:06:31.751 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:31.751 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:31.751 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:31.751 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:31.751 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:31.751 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:06:31.751 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:31.751 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:31.752 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:31.752 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:31.752 23:33:06 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:31.752 Found net devices under 0000:09:00.0: cvl_0_0 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:31.752 Found net devices under 0000:09:00.1: cvl_0_1 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:06:31.752 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:06:31.752 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms
00:06:31.752
00:06:31.752 --- 10.0.0.2 ping statistics ---
00:06:31.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:31.752 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms
00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:06:31.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:06:31.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms
00:06:31.752
00:06:31.752 --- 10.0.0.1 ping statistics ---
00:06:31.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:31.752 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms
00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0
00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0
00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:06:31.752 ************************************
00:06:31.752 START TEST nvmf_filesystem_no_in_capsule
00:06:31.752 ************************************
00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0
00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0
00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable
00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3680993
00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:06:31.752 23:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3680993
00:06:31.753 23:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 3680993 ']'
00:06:31.753 23:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:31.753 23:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:31.753 23:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:31.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:31.753 23:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:31.753 23:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:31.753 [2024-07-15 23:33:06.527253] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization...
00:06:31.753 [2024-07-15 23:33:06.527317] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:31.753 EAL: No free 2048 kB hugepages reported on node 1
00:06:31.753 [2024-07-15 23:33:06.591288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:31.753 [2024-07-15 23:33:06.708046] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:06:31.753 [2024-07-15 23:33:06.708111] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:06:31.753 [2024-07-15 23:33:06.708125] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:06:31.753 [2024-07-15 23:33:06.708137] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:06:31.753 [2024-07-15 23:33:06.708150] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
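
In script form, the nvmfappstart/waitforlisten sequence traced above comes down to a few commands. A minimal sketch, assuming the cvl_0_0_ns_spdk namespace set up earlier and $SPDK_DIR standing in for this workspace's spdk checkout ($SPDK_DIR is an illustrative variable, not from the trace):

    # Launch the target inside the namespace with the same flags as the trace.
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # waitforlisten polls for the RPC UNIX socket rather than sleeping a fixed time.
    while ! [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
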
00:06:31.753 [2024-07-15 23:33:06.708204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.753 [2024-07-15 23:33:06.708266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.753 [2024-07-15 23:33:06.708331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.753 [2024-07-15 23:33:06.708333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.753 23:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.753 23:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:31.753 23:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:31.753 23:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:31.753 23:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:31.753 23:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:31.753 23:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:31.753 23:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:31.753 23:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.753 23:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:31.753 [2024-07-15 23:33:06.864701] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:31.753 23:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.753 23:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:31.753 23:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.753 23:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:32.011 Malloc1 00:06:32.011 23:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.011 23:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:32.011 23:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.011 23:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:32.011 23:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.011 23:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:32.011 23:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.011 23:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:06:32.011 23:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.011 23:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:32.011 23:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.011 23:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:32.011 [2024-07-15 23:33:07.049539] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:32.011 23:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.011 23:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:32.011 23:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:32.011 23:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:32.011 23:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:32.011 23:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:32.011 23:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:32.011 23:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.011 23:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:32.011 23:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.011 23:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:32.011 { 00:06:32.011 "name": "Malloc1", 00:06:32.011 "aliases": [ 00:06:32.011 "02b9a409-96fe-49dd-965d-e5ee601e8dda" 00:06:32.011 ], 00:06:32.011 "product_name": "Malloc disk", 00:06:32.011 "block_size": 512, 00:06:32.011 "num_blocks": 1048576, 00:06:32.011 "uuid": "02b9a409-96fe-49dd-965d-e5ee601e8dda", 00:06:32.011 "assigned_rate_limits": { 00:06:32.011 "rw_ios_per_sec": 0, 00:06:32.011 "rw_mbytes_per_sec": 0, 00:06:32.011 "r_mbytes_per_sec": 0, 00:06:32.011 "w_mbytes_per_sec": 0 00:06:32.011 }, 00:06:32.011 "claimed": true, 00:06:32.011 "claim_type": "exclusive_write", 00:06:32.011 "zoned": false, 00:06:32.011 "supported_io_types": { 00:06:32.011 "read": true, 00:06:32.011 "write": true, 00:06:32.011 "unmap": true, 00:06:32.011 "flush": true, 00:06:32.011 "reset": true, 00:06:32.011 "nvme_admin": false, 00:06:32.011 "nvme_io": false, 00:06:32.011 "nvme_io_md": false, 00:06:32.011 "write_zeroes": true, 00:06:32.011 "zcopy": true, 00:06:32.011 "get_zone_info": false, 00:06:32.011 "zone_management": false, 00:06:32.011 "zone_append": false, 00:06:32.011 "compare": false, 00:06:32.011 "compare_and_write": false, 00:06:32.011 "abort": true, 00:06:32.011 "seek_hole": false, 00:06:32.011 "seek_data": false, 00:06:32.011 "copy": true, 00:06:32.011 "nvme_iov_md": false 00:06:32.011 }, 00:06:32.011 "memory_domains": [ 00:06:32.011 { 
00:06:32.011 "dma_device_id": "system", 00:06:32.011 "dma_device_type": 1 00:06:32.011 }, 00:06:32.011 { 00:06:32.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:32.011 "dma_device_type": 2 00:06:32.011 } 00:06:32.011 ], 00:06:32.011 "driver_specific": {} 00:06:32.011 } 00:06:32.011 ]' 00:06:32.011 23:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:32.011 23:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:32.011 23:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:32.270 23:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:32.270 23:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:32.270 23:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:32.270 23:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:32.270 23:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:32.835 23:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:32.835 23:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:32.835 23:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:32.835 23:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:32.835 23:33:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:34.731 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:34.731 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:34.731 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:34.731 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:34.731 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:34.731 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:34.731 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:34.731 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:34.731 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:34.731 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:06:34.731 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:34.731 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:34.731 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:34.731 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:34.731 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:34.731 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:34.731 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:34.989 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:35.246 23:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:36.616 23:33:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:36.616 23:33:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:36.616 23:33:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:36.616 23:33:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.616 23:33:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:36.616 ************************************ 00:06:36.616 START TEST filesystem_ext4 00:06:36.616 ************************************ 00:06:36.616 23:33:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:36.616 23:33:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:36.616 23:33:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:36.616 23:33:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:36.616 23:33:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:36.616 23:33:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:36.616 23:33:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:36.616 23:33:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:36.616 23:33:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:36.616 23:33:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:36.616 23:33:11 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:06:36.616 mke2fs 1.46.5 (30-Dec-2021)
00:06:36.616 Discarding device blocks: 0/522240 done
00:06:36.616 Creating filesystem with 522240 1k blocks and 130560 inodes
00:06:36.616 Filesystem UUID: 7bad5768-acc8-46e0-bb8b-c965fd880b1d
00:06:36.616 Superblock backups stored on blocks:
00:06:36.616 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:06:36.616
00:06:36.616 Allocating group tables: 0/64 done
00:06:36.616 Writing inode tables: 0/64 done
00:06:37.178 Creating journal (8192 blocks): done
00:06:38.109 Writing superblocks and filesystem accounting information: 0/64 50/64 done
00:06:38.109
00:06:38.109 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0
00:06:38.109 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:06:39.052 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:06:39.052 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync
00:06:39.052 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:06:39.052 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync
00:06:39.052 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0
00:06:39.052 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device
00:06:39.052 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3680993
00:06:39.052 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:06:39.052 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:06:39.052 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:06:39.052 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:06:39.052
00:06:39.052 real 0m2.575s
00:06:39.052 user 0m0.017s
00:06:39.052 sys 0m0.057s
00:06:39.052 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:39.052 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x
00:06:39.052 ************************************
00:06:39.052 END TEST filesystem_ext4
00:06:39.052 ************************************
00:06:39.052 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0
00:06:39.052 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1
00:06:39.052 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:06:39.052 23:33:13 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.052 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:39.052 ************************************ 00:06:39.052 START TEST filesystem_btrfs 00:06:39.052 ************************************ 00:06:39.053 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:39.053 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:39.053 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:39.053 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:39.053 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:39.053 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:39.053 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:39.053 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:39.053 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:39.053 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:39.053 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:39.053 btrfs-progs v6.6.2 00:06:39.053 See https://btrfs.readthedocs.io for more information. 00:06:39.053 00:06:39.053 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:39.053 NOTE: several default settings have changed in version 5.15, please make sure 00:06:39.053 this does not affect your deployments: 00:06:39.053 - DUP for metadata (-m dup) 00:06:39.053 - enabled no-holes (-O no-holes) 00:06:39.053 - enabled free-space-tree (-R free-space-tree) 00:06:39.053 00:06:39.053 Label: (null) 00:06:39.053 UUID: 16247185-7a72-4a3b-afc0-8f70c674eb42 00:06:39.053 Node size: 16384 00:06:39.053 Sector size: 4096 00:06:39.053 Filesystem size: 510.00MiB 00:06:39.053 Block group profiles: 00:06:39.053 Data: single 8.00MiB 00:06:39.053 Metadata: DUP 32.00MiB 00:06:39.053 System: DUP 8.00MiB 00:06:39.053 SSD detected: yes 00:06:39.053 Zoned device: no 00:06:39.053 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:39.053 Runtime features: free-space-tree 00:06:39.053 Checksum: crc32c 00:06:39.053 Number of devices: 1 00:06:39.053 Devices: 00:06:39.053 ID SIZE PATH 00:06:39.053 1 510.00MiB /dev/nvme0n1p1 00:06:39.053 00:06:39.053 23:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:39.053 23:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:39.617 23:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:39.617 23:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:06:39.617 23:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:39.617 23:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:06:39.617 23:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:39.617 23:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:39.617 23:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3680993 00:06:39.617 23:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:39.617 23:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:39.617 23:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:39.617 23:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:39.875 00:06:39.875 real 0m0.775s 00:06:39.875 user 0m0.019s 00:06:39.875 sys 0m0.112s 00:06:39.875 23:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.875 23:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:39.875 ************************************ 00:06:39.875 END TEST filesystem_btrfs 00:06:39.875 ************************************ 00:06:39.875 23:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:39.875 23:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1
00:06:39.875 23:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:06:39.875 23:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:39.875 23:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:39.875 ************************************
00:06:39.875 START TEST filesystem_xfs
00:06:39.875 ************************************
00:06:39.876 23:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1
00:06:39.876 23:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:06:39.876 23:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:06:39.876 23:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:06:39.876 23:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs
00:06:39.876 23:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1
00:06:39.876 23:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0
00:06:39.876 23:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force
00:06:39.876 23:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']'
00:06:39.876 23:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f
00:06:39.876 23:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1
00:06:39.876 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:06:39.876 = sectsz=512 attr=2, projid32bit=1
00:06:39.876 = crc=1 finobt=1, sparse=1, rmapbt=0
00:06:39.876 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:06:39.876 data = bsize=4096 blocks=130560, imaxpct=25
00:06:39.876 = sunit=0 swidth=0 blks
00:06:39.876 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:06:39.876 log =internal log bsize=4096 blocks=16384, version=2
00:06:39.876 = sectsz=512 sunit=0 blks, lazy-count=1
00:06:39.876 realtime =none extsz=4096 blocks=0, rtextents=0
00:06:40.807 Discarding blocks...Done.
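
With the filesystem created, every filesystem_<fstype> subtest exercises it the same way, as the entries that follow show. Condensed, using the paths from this run:

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa   # prove the mount accepts a write
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"      # the target must still be alive afterwards
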
00:06:40.807 23:33:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:40.807 23:33:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:43.327 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:43.327 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:06:43.327 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:43.327 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:06:43.327 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:06:43.327 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:43.327 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3680993 00:06:43.327 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:43.327 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:43.327 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:43.327 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:43.327 00:06:43.327 real 0m3.344s 00:06:43.327 user 0m0.017s 00:06:43.327 sys 0m0.055s 00:06:43.327 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.327 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:43.327 ************************************ 00:06:43.327 END TEST filesystem_xfs 00:06:43.327 ************************************ 00:06:43.327 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:43.327 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:43.327 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:43.327 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:43.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:43.327 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:43.327 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:43.327 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:43.327 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:43.327 23:33:18 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:43.327 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:43.327 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:43.327 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:43.327 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.327 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:43.327 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.327 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:43.327 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3680993 00:06:43.327 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 3680993 ']' 00:06:43.327 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 3680993 00:06:43.327 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:43.327 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:43.327 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3680993 00:06:43.327 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:43.327 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:43.327 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3680993' 00:06:43.327 killing process with pid 3680993 00:06:43.327 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 3680993 00:06:43.327 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 3680993 00:06:43.892 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:43.892 00:06:43.892 real 0m12.376s 00:06:43.892 user 0m47.411s 00:06:43.892 sys 0m1.799s 00:06:43.892 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.892 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:43.892 ************************************ 00:06:43.892 END TEST nvmf_filesystem_no_in_capsule 00:06:43.892 ************************************ 00:06:43.892 23:33:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:43.892 23:33:18 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:43.892 23:33:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:06:43.892 23:33:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.892 23:33:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:43.892 ************************************ 00:06:43.892 START TEST nvmf_filesystem_in_capsule 00:06:43.892 ************************************ 00:06:43.892 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:06:43.892 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:43.892 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:43.892 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:43.892 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:43.892 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:43.892 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3683186 00:06:43.892 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:43.892 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3683186 00:06:43.892 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 3683186 ']' 00:06:43.892 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.892 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.892 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.892 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.892 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:43.892 [2024-07-15 23:33:18.954915] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:06:43.892 [2024-07-15 23:33:18.955008] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:43.892 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.150 [2024-07-15 23:33:19.018558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:44.150 [2024-07-15 23:33:19.123298] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:44.150 [2024-07-15 23:33:19.123350] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:06:44.150 [2024-07-15 23:33:19.123378] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:44.150 [2024-07-15 23:33:19.123389] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:44.150 [2024-07-15 23:33:19.123399] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:44.150 [2024-07-15 23:33:19.123496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.150 [2024-07-15 23:33:19.123556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.150 [2024-07-15 23:33:19.123622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:44.150 [2024-07-15 23:33:19.123625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.150 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:44.150 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:44.150 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:44.150 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:44.150 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:44.407 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:44.407 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:44.407 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:44.407 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.407 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:44.407 [2024-07-15 23:33:19.284858] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:44.407 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.407 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:44.407 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.407 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:44.407 Malloc1 00:06:44.407 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.407 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:44.407 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.407 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:44.407 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.407 23:33:19 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:44.407 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.407 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:44.407 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.407 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:44.407 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.407 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:44.407 [2024-07-15 23:33:19.469315] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:44.407 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.407 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:44.407 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:44.407 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:44.407 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:44.407 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:44.407 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:44.407 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.407 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:44.407 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.407 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:44.407 { 00:06:44.407 "name": "Malloc1", 00:06:44.407 "aliases": [ 00:06:44.407 "53e0fa24-9e24-40b5-815a-31d8cbfd08ac" 00:06:44.407 ], 00:06:44.407 "product_name": "Malloc disk", 00:06:44.407 "block_size": 512, 00:06:44.407 "num_blocks": 1048576, 00:06:44.407 "uuid": "53e0fa24-9e24-40b5-815a-31d8cbfd08ac", 00:06:44.407 "assigned_rate_limits": { 00:06:44.407 "rw_ios_per_sec": 0, 00:06:44.407 "rw_mbytes_per_sec": 0, 00:06:44.407 "r_mbytes_per_sec": 0, 00:06:44.407 "w_mbytes_per_sec": 0 00:06:44.407 }, 00:06:44.407 "claimed": true, 00:06:44.407 "claim_type": "exclusive_write", 00:06:44.407 "zoned": false, 00:06:44.407 "supported_io_types": { 00:06:44.407 "read": true, 00:06:44.407 "write": true, 00:06:44.407 "unmap": true, 00:06:44.407 "flush": true, 00:06:44.407 "reset": true, 00:06:44.407 "nvme_admin": false, 00:06:44.407 "nvme_io": false, 00:06:44.407 "nvme_io_md": false, 00:06:44.407 "write_zeroes": true, 00:06:44.407 "zcopy": true, 00:06:44.407 "get_zone_info": false, 00:06:44.407 "zone_management": false, 00:06:44.407 
"zone_append": false, 00:06:44.407 "compare": false, 00:06:44.407 "compare_and_write": false, 00:06:44.407 "abort": true, 00:06:44.407 "seek_hole": false, 00:06:44.407 "seek_data": false, 00:06:44.407 "copy": true, 00:06:44.407 "nvme_iov_md": false 00:06:44.407 }, 00:06:44.407 "memory_domains": [ 00:06:44.407 { 00:06:44.407 "dma_device_id": "system", 00:06:44.407 "dma_device_type": 1 00:06:44.407 }, 00:06:44.407 { 00:06:44.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:44.407 "dma_device_type": 2 00:06:44.407 } 00:06:44.407 ], 00:06:44.407 "driver_specific": {} 00:06:44.407 } 00:06:44.407 ]' 00:06:44.407 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:44.664 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:44.664 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:44.664 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:44.664 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:44.664 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:44.664 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:44.664 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:45.230 23:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:45.230 23:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:45.230 23:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:45.230 23:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:45.230 23:33:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:47.128 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:47.386 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:47.386 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:47.386 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:47.386 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:47.386 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:47.386 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:47.386 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:06:47.386 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:47.386 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:47.386 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:47.386 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:47.386 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:47.386 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:47.386 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:47.386 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:47.386 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:47.644 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:47.901 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:48.833 23:33:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:48.833 23:33:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:48.833 23:33:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:48.833 23:33:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.833 23:33:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:48.833 ************************************ 00:06:48.833 START TEST filesystem_in_capsule_ext4 00:06:48.833 ************************************ 00:06:48.833 23:33:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:48.833 23:33:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:48.833 23:33:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:48.833 23:33:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:48.833 23:33:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:48.833 23:33:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:48.833 23:33:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:48.833 23:33:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:48.833 23:33:23 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']'
00:06:48.833 23:33:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F
00:06:48.833 23:33:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:06:48.833 mke2fs 1.46.5 (30-Dec-2021)
00:06:49.091 Discarding device blocks: 0/522240 done
00:06:49.091 Creating filesystem with 522240 1k blocks and 130560 inodes
00:06:49.091 Filesystem UUID: 703db206-1724-462c-8d1a-64eabf1423f4
00:06:49.091 Superblock backups stored on blocks:
00:06:49.091 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:06:49.091
00:06:49.091 Allocating group tables: 0/64 done
00:06:49.091 Writing inode tables: 0/64 done
00:06:50.022 Creating journal (8192 blocks): done
00:06:50.588 Writing superblocks and filesystem accounting information: 0/64 2/64 done
00:06:50.588
00:06:50.588 23:33:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0
00:06:50.588 23:33:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:06:51.520 23:33:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:06:51.520 23:33:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync
00:06:51.520 23:33:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:06:51.520 23:33:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync
00:06:51.520 23:33:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0
00:06:51.520 23:33:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device
00:06:51.520 23:33:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3683186
00:06:51.520 23:33:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:06:51.520 23:33:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:06:51.520 23:33:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:06:51.520 23:33:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:06:51.520
00:06:51.520 real 0m2.664s
00:06:51.520 user 0m0.021s
00:06:51.520 sys 0m0.054s
00:06:51.520 23:33:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:51.520 23:33:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x
00:06:51.520 ************************************
00:06:51.520 END TEST filesystem_in_capsule_ext4
00:06:51.520 ************************************
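
Functionally, this in_capsule pass differs from the earlier no_in_capsule pass only in the transport setup: -c 4096 lets the initiator carry up to 4096 bytes of I/O data inside the command capsule itself rather than in a separate transfer. Side by side, the two rpc_cmd invocations from the trace, written out against scripts/rpc.py (the tool rpc_cmd wraps):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0      # no_in_capsule run
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # this run
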
00:06:51.520 23:33:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:51.520 23:33:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:51.520 23:33:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:51.520 23:33:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.520 23:33:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:51.520 ************************************ 00:06:51.520 START TEST filesystem_in_capsule_btrfs 00:06:51.520 ************************************ 00:06:51.520 23:33:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:51.520 23:33:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:51.520 23:33:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:51.520 23:33:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:51.520 23:33:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:51.520 23:33:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:51.520 23:33:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:51.520 23:33:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:51.521 23:33:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:51.521 23:33:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:51.521 23:33:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:51.779 btrfs-progs v6.6.2 00:06:51.779 See https://btrfs.readthedocs.io for more information. 00:06:51.779 00:06:51.779 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:51.779 NOTE: several default settings have changed in version 5.15, please make sure 00:06:51.779 this does not affect your deployments: 00:06:51.779 - DUP for metadata (-m dup) 00:06:51.779 - enabled no-holes (-O no-holes) 00:06:51.779 - enabled free-space-tree (-R free-space-tree) 00:06:51.779 00:06:51.779 Label: (null) 00:06:51.779 UUID: a027c430-4e8c-44b4-bcfb-75256a9c2e6b 00:06:51.779 Node size: 16384 00:06:51.779 Sector size: 4096 00:06:51.779 Filesystem size: 510.00MiB 00:06:51.779 Block group profiles: 00:06:51.779 Data: single 8.00MiB 00:06:51.779 Metadata: DUP 32.00MiB 00:06:51.779 System: DUP 8.00MiB 00:06:51.779 SSD detected: yes 00:06:51.779 Zoned device: no 00:06:51.779 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:51.779 Runtime features: free-space-tree 00:06:51.779 Checksum: crc32c 00:06:51.779 Number of devices: 1 00:06:51.779 Devices: 00:06:51.779 ID SIZE PATH 00:06:51.779 1 510.00MiB /dev/nvme0n1p1 00:06:51.779 00:06:51.779 23:33:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:51.779 23:33:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:52.712 23:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:52.712 23:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:06:52.712 23:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:52.712 23:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:06:52.712 23:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:52.712 23:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:52.712 23:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3683186 00:06:52.712 23:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:52.712 23:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:52.712 23:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:52.712 23:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:52.712 00:06:52.712 real 0m0.983s 00:06:52.712 user 0m0.015s 00:06:52.712 sys 0m0.121s 00:06:52.712 23:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.712 23:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:52.712 ************************************ 00:06:52.712 END TEST filesystem_in_capsule_btrfs 00:06:52.712 ************************************ 00:06:52.712 23:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:06:52.712 23:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:52.712 23:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:52.712 23:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.712 23:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:52.712 ************************************ 00:06:52.712 START TEST filesystem_in_capsule_xfs 00:06:52.712 ************************************ 00:06:52.712 23:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:52.712 23:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:52.712 23:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:52.712 23:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:52.712 23:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:52.712 23:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:52.712 23:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:52.712 23:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:06:52.712 23:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:52.712 23:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:52.712 23:33:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:52.712 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:52.712 = sectsz=512 attr=2, projid32bit=1 00:06:52.712 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:52.712 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:52.712 data = bsize=4096 blocks=130560, imaxpct=25 00:06:52.712 = sunit=0 swidth=0 blks 00:06:52.712 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:52.712 log =internal log bsize=4096 blocks=16384, version=2 00:06:52.712 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:52.712 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:53.645 Discarding blocks...Done. 
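All three subtests funnel through the same make_filesystem helper traced above; the only branch is the force flag, since mkfs.ext4 takes -F while mkfs.btrfs and mkfs.xfs take lowercase -f. A hedged reconstruction from the autotest_common.sh lines visible in the trace (the real helper also keeps a retry counter, the local i=0 seen above, which is elided here):

    make_filesystem() {
        local fstype=$1 dev_name=$2 force
        if [[ $fstype == ext4 ]]; then
            force=-F                # mkfs.ext4 prompts without -F
        else
            force=-f                # mkfs.btrfs / mkfs.xfs use -f
        fi
        mkfs.$fstype $force "$dev_name"
    }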
00:06:53.645 23:33:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:53.645 23:33:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:55.543 23:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:55.543 23:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:06:55.543 23:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:55.543 23:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:06:55.543 23:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:06:55.543 23:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:55.543 23:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3683186 00:06:55.543 23:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:55.543 23:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:55.543 23:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:55.543 23:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:55.543 00:06:55.543 real 0m2.619s 00:06:55.543 user 0m0.009s 00:06:55.543 sys 0m0.065s 00:06:55.543 23:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.543 23:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:55.543 ************************************ 00:06:55.543 END TEST filesystem_in_capsule_xfs 00:06:55.543 ************************************ 00:06:55.543 23:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:55.543 23:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:55.543 23:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:55.543 23:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:55.543 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:55.543 23:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:55.544 23:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:55.544 23:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:55.544 23:33:30 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:55.544 23:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:55.544 23:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:55.802 23:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:55.802 23:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:55.802 23:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.802 23:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:55.802 23:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.802 23:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:55.802 23:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3683186 00:06:55.802 23:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 3683186 ']' 00:06:55.802 23:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 3683186 00:06:55.802 23:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:55.802 23:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:55.802 23:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3683186 00:06:55.802 23:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:55.802 23:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:55.802 23:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3683186' 00:06:55.802 killing process with pid 3683186 00:06:55.802 23:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 3683186 00:06:55.802 23:33:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 3683186 00:06:56.061 23:33:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:56.061 00:06:56.061 real 0m12.272s 00:06:56.061 user 0m47.048s 00:06:56.061 sys 0m1.803s 00:06:56.061 23:33:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.061 23:33:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:56.061 ************************************ 00:06:56.061 END TEST nvmf_filesystem_in_capsule 00:06:56.061 ************************************ 00:06:56.320 23:33:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:56.320 23:33:31 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:06:56.320 23:33:31 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:06:56.320 23:33:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:06:56.320 23:33:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:56.320 23:33:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:06:56.320 23:33:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:56.320 23:33:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:56.320 rmmod nvme_tcp 00:06:56.320 rmmod nvme_fabrics 00:06:56.320 rmmod nvme_keyring 00:06:56.320 23:33:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:56.320 23:33:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:06:56.320 23:33:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:06:56.320 23:33:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:56.320 23:33:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:56.320 23:33:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:56.320 23:33:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:56.320 23:33:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:56.320 23:33:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:56.320 23:33:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:56.320 23:33:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:56.320 23:33:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:58.229 23:33:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:58.229 00:06:58.229 real 0m29.197s 00:06:58.229 user 1m35.384s 00:06:58.229 sys 0m5.234s 00:06:58.229 23:33:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.229 23:33:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:58.229 ************************************ 00:06:58.229 END TEST nvmf_filesystem 00:06:58.229 ************************************ 00:06:58.229 23:33:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:58.229 23:33:33 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:58.229 23:33:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:58.229 23:33:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.229 23:33:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:58.524 ************************************ 00:06:58.524 START TEST nvmf_target_discovery 00:06:58.524 ************************************ 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:58.524 * Looking for test storage... 
00:06:58.524 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:58.524 23:33:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:58.525 23:33:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:58.525 23:33:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:58.525 23:33:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:58.525 23:33:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:58.525 23:33:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:06:58.525 23:33:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:00.432 23:33:35 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:00.432 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:00.432 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:00.432 Found net devices under 0000:09:00.0: cvl_0_0 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:00.432 Found net devices under 0000:09:00.1: cvl_0_1 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:00.432 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:00.433 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:00.433 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:00.433 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:00.433 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:00.433 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:00.433 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:00.433 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:00.433 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:00.433 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:00.433 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:00.691 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:00.691 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:00.691 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:07:00.691 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:00.691 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:00.691 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:00.691 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:00.691 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:00.691 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:07:00.691 00:07:00.691 --- 10.0.0.2 ping statistics --- 00:07:00.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.691 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:07:00.691 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:00.691 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:00.691 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:07:00.691 00:07:00.691 --- 10.0.0.1 ping statistics --- 00:07:00.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.691 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:07:00.691 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:00.691 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:00.691 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:00.691 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:00.691 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:00.691 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:00.691 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:00.691 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:00.691 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:00.691 23:33:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:00.691 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:00.691 23:33:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:00.691 23:33:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.691 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3686671 00:07:00.691 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:00.691 23:33:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3686671 00:07:00.691 23:33:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 3686671 ']' 00:07:00.691 23:33:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.691 23:33:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:00.691 23:33:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:07:00.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.691 23:33:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:00.691 23:33:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.691 [2024-07-15 23:33:35.706967] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:07:00.691 [2024-07-15 23:33:35.707048] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:00.691 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.691 [2024-07-15 23:33:35.771414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:00.950 [2024-07-15 23:33:35.882121] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:00.950 [2024-07-15 23:33:35.882176] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:00.950 [2024-07-15 23:33:35.882198] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:00.950 [2024-07-15 23:33:35.882210] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:00.950 [2024-07-15 23:33:35.882220] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:00.950 [2024-07-15 23:33:35.882364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.950 [2024-07-15 23:33:35.882417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.950 [2024-07-15 23:33:35.882442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:00.950 [2024-07-15 23:33:35.882444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.950 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.950 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:07:00.950 23:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:00.950 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:00.950 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.950 23:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:00.950 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:00.950 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.950 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.950 [2024-07-15 23:33:36.037730] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:00.950 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.950 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:00.950 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:00.950 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
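The four bdev_null_create/nvmf_create_subsystem rounds that start here all come from one loop in discovery.sh (lines 26-30 in the trace): each iteration backs subsystem cnode$i with a null bdev and a TCP listener on 10.0.0.2:4420. Condensed, with the serial numbers the log shows:

    for i in $(seq 1 4); do
        rpc_cmd bdev_null_create Null$i 102400 512              # NULL_BDEV_SIZE=102400, NULL_BLOCK_SIZE=512 (discovery.sh@11-12 above)
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
            -a -s SPDK0000000000000$i                           # -a: allow any host
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t tcp -a 10.0.0.2 -s 4420
    done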
00:07:00.950 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.950 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.950 Null1 00:07:00.950 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.951 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:00.951 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.951 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.951 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.951 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:00.951 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.951 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.951 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.951 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:00.951 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.951 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:01.209 [2024-07-15 23:33:36.078070] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:01.209 Null2 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:01.209 23:33:36 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:01.209 Null3 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:01.209 Null4 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.209 23:33:36 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.209 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:01.210 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.210 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:07:01.210 00:07:01.210 Discovery Log Number of Records 6, Generation counter 6 00:07:01.210 =====Discovery Log Entry 0====== 00:07:01.210 trtype: tcp 00:07:01.210 adrfam: ipv4 00:07:01.210 subtype: current discovery subsystem 00:07:01.210 treq: not required 00:07:01.210 portid: 0 00:07:01.210 trsvcid: 4420 00:07:01.210 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:01.210 traddr: 10.0.0.2 00:07:01.210 eflags: explicit discovery connections, duplicate discovery information 00:07:01.210 sectype: none 00:07:01.210 =====Discovery Log Entry 1====== 00:07:01.210 trtype: tcp 00:07:01.210 adrfam: ipv4 00:07:01.210 subtype: nvme subsystem 00:07:01.210 treq: not required 00:07:01.210 portid: 0 00:07:01.210 trsvcid: 4420 00:07:01.210 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:01.210 traddr: 10.0.0.2 00:07:01.210 eflags: none 00:07:01.210 sectype: none 00:07:01.210 =====Discovery Log Entry 2====== 00:07:01.210 trtype: tcp 00:07:01.210 adrfam: ipv4 00:07:01.210 subtype: nvme subsystem 00:07:01.210 treq: not required 00:07:01.210 portid: 0 00:07:01.210 trsvcid: 4420 00:07:01.210 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:01.210 traddr: 10.0.0.2 00:07:01.210 eflags: none 00:07:01.210 sectype: none 00:07:01.210 =====Discovery Log Entry 3====== 00:07:01.210 trtype: tcp 00:07:01.210 adrfam: ipv4 00:07:01.210 subtype: nvme subsystem 00:07:01.210 treq: not required 00:07:01.210 portid: 0 00:07:01.210 trsvcid: 4420 00:07:01.210 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:01.210 traddr: 10.0.0.2 00:07:01.210 eflags: none 00:07:01.210 sectype: none 00:07:01.210 =====Discovery Log Entry 4====== 00:07:01.210 trtype: tcp 00:07:01.210 adrfam: ipv4 00:07:01.210 subtype: nvme subsystem 00:07:01.210 treq: not required 
00:07:01.210 portid: 0 00:07:01.210 trsvcid: 4420 00:07:01.210 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:01.210 traddr: 10.0.0.2 00:07:01.210 eflags: none 00:07:01.210 sectype: none 00:07:01.210 =====Discovery Log Entry 5====== 00:07:01.210 trtype: tcp 00:07:01.210 adrfam: ipv4 00:07:01.210 subtype: discovery subsystem referral 00:07:01.210 treq: not required 00:07:01.210 portid: 0 00:07:01.210 trsvcid: 4430 00:07:01.210 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:01.210 traddr: 10.0.0.2 00:07:01.210 eflags: none 00:07:01.210 sectype: none 00:07:01.210 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:01.210 Perform nvmf subsystem discovery via RPC 00:07:01.210 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:01.210 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.210 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:01.468 [ 00:07:01.468 { 00:07:01.468 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:01.468 "subtype": "Discovery", 00:07:01.468 "listen_addresses": [ 00:07:01.468 { 00:07:01.468 "trtype": "TCP", 00:07:01.468 "adrfam": "IPv4", 00:07:01.468 "traddr": "10.0.0.2", 00:07:01.468 "trsvcid": "4420" 00:07:01.468 } 00:07:01.468 ], 00:07:01.468 "allow_any_host": true, 00:07:01.468 "hosts": [] 00:07:01.468 }, 00:07:01.468 { 00:07:01.468 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:01.468 "subtype": "NVMe", 00:07:01.468 "listen_addresses": [ 00:07:01.468 { 00:07:01.468 "trtype": "TCP", 00:07:01.468 "adrfam": "IPv4", 00:07:01.468 "traddr": "10.0.0.2", 00:07:01.468 "trsvcid": "4420" 00:07:01.468 } 00:07:01.468 ], 00:07:01.468 "allow_any_host": true, 00:07:01.468 "hosts": [], 00:07:01.468 "serial_number": "SPDK00000000000001", 00:07:01.468 "model_number": "SPDK bdev Controller", 00:07:01.468 "max_namespaces": 32, 00:07:01.468 "min_cntlid": 1, 00:07:01.468 "max_cntlid": 65519, 00:07:01.468 "namespaces": [ 00:07:01.468 { 00:07:01.468 "nsid": 1, 00:07:01.468 "bdev_name": "Null1", 00:07:01.468 "name": "Null1", 00:07:01.468 "nguid": "2AD71A7B970D4ACC9267ABAC164DBBCA", 00:07:01.468 "uuid": "2ad71a7b-970d-4acc-9267-abac164dbbca" 00:07:01.468 } 00:07:01.468 ] 00:07:01.468 }, 00:07:01.468 { 00:07:01.468 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:01.468 "subtype": "NVMe", 00:07:01.468 "listen_addresses": [ 00:07:01.468 { 00:07:01.468 "trtype": "TCP", 00:07:01.468 "adrfam": "IPv4", 00:07:01.468 "traddr": "10.0.0.2", 00:07:01.468 "trsvcid": "4420" 00:07:01.468 } 00:07:01.468 ], 00:07:01.468 "allow_any_host": true, 00:07:01.468 "hosts": [], 00:07:01.468 "serial_number": "SPDK00000000000002", 00:07:01.468 "model_number": "SPDK bdev Controller", 00:07:01.468 "max_namespaces": 32, 00:07:01.468 "min_cntlid": 1, 00:07:01.468 "max_cntlid": 65519, 00:07:01.468 "namespaces": [ 00:07:01.468 { 00:07:01.468 "nsid": 1, 00:07:01.468 "bdev_name": "Null2", 00:07:01.468 "name": "Null2", 00:07:01.468 "nguid": "D8A35F4C1E4F42D2B86CD0CB71AE6217", 00:07:01.468 "uuid": "d8a35f4c-1e4f-42d2-b86c-d0cb71ae6217" 00:07:01.468 } 00:07:01.468 ] 00:07:01.468 }, 00:07:01.468 { 00:07:01.468 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:01.468 "subtype": "NVMe", 00:07:01.468 "listen_addresses": [ 00:07:01.468 { 00:07:01.468 "trtype": "TCP", 00:07:01.468 "adrfam": "IPv4", 00:07:01.468 "traddr": "10.0.0.2", 00:07:01.468 "trsvcid": "4420" 00:07:01.468 } 00:07:01.468 ], 00:07:01.468 "allow_any_host": true, 
00:07:01.468 "hosts": [], 00:07:01.469 "serial_number": "SPDK00000000000003", 00:07:01.469 "model_number": "SPDK bdev Controller", 00:07:01.469 "max_namespaces": 32, 00:07:01.469 "min_cntlid": 1, 00:07:01.469 "max_cntlid": 65519, 00:07:01.469 "namespaces": [ 00:07:01.469 { 00:07:01.469 "nsid": 1, 00:07:01.469 "bdev_name": "Null3", 00:07:01.469 "name": "Null3", 00:07:01.469 "nguid": "D1B2A10F7E824604BEAC2712DAF6A90A", 00:07:01.469 "uuid": "d1b2a10f-7e82-4604-beac-2712daf6a90a" 00:07:01.469 } 00:07:01.469 ] 00:07:01.469 }, 00:07:01.469 { 00:07:01.469 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:01.469 "subtype": "NVMe", 00:07:01.469 "listen_addresses": [ 00:07:01.469 { 00:07:01.469 "trtype": "TCP", 00:07:01.469 "adrfam": "IPv4", 00:07:01.469 "traddr": "10.0.0.2", 00:07:01.469 "trsvcid": "4420" 00:07:01.469 } 00:07:01.469 ], 00:07:01.469 "allow_any_host": true, 00:07:01.469 "hosts": [], 00:07:01.469 "serial_number": "SPDK00000000000004", 00:07:01.469 "model_number": "SPDK bdev Controller", 00:07:01.469 "max_namespaces": 32, 00:07:01.469 "min_cntlid": 1, 00:07:01.469 "max_cntlid": 65519, 00:07:01.469 "namespaces": [ 00:07:01.469 { 00:07:01.469 "nsid": 1, 00:07:01.469 "bdev_name": "Null4", 00:07:01.469 "name": "Null4", 00:07:01.469 "nguid": "6467A80345A241AFB47774C8B61592F5", 00:07:01.469 "uuid": "6467a803-45a2-41af-b477-74c8b61592f5" 00:07:01.469 } 00:07:01.469 ] 00:07:01.469 } 00:07:01.469 ] 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:01.469 rmmod nvme_tcp 00:07:01.469 rmmod nvme_fabrics 00:07:01.469 rmmod nvme_keyring 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3686671 ']' 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3686671 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 3686671 ']' 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 3686671 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3686671 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3686671' 00:07:01.469 killing process with pid 3686671 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 3686671 00:07:01.469 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 3686671 00:07:01.729 23:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:01.729 23:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:01.729 23:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:01.729 23:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:01.729 23:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:01.729 23:33:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:01.729 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:01.729 23:33:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:04.263 23:33:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:04.263 00:07:04.263 real 0m5.500s 00:07:04.263 user 0m4.380s 00:07:04.263 sys 0m1.884s 00:07:04.263 23:33:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.263 23:33:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:04.263 ************************************ 00:07:04.263 END TEST nvmf_target_discovery 00:07:04.263 ************************************ 00:07:04.263 23:33:38 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:07:04.263 23:33:38 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:04.263 23:33:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:04.263 23:33:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.263 23:33:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:04.263 ************************************ 00:07:04.263 START TEST nvmf_referrals 00:07:04.263 ************************************ 00:07:04.263 23:33:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:04.263 * Looking for test storage... 00:07:04.263 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:04.263 23:33:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:04.263 23:33:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:04.263 23:33:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:04.263 23:33:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:04.263 23:33:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:04.263 23:33:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:04.263 23:33:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:04.263 23:33:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:04.263 23:33:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:04.263 23:33:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:04.263 23:33:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:04.263 23:33:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:04.263 23:33:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:04.263 23:33:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:04.263 23:33:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:04.263 23:33:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:04.263 23:33:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:04.263 23:33:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:04.263 23:33:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:04.263 23:33:38 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:04.264 23:33:38 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:04.264 23:33:38 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:04.264 23:33:38 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.264 23:33:38 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.264 23:33:38 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.264 23:33:38 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:04.264 23:33:38 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.264 23:33:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:04.264 23:33:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:04.264 23:33:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:04.264 23:33:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:04.264 23:33:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:04.264 23:33:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:04.264 23:33:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:04.264 23:33:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:04.264 23:33:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:04.264 23:33:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:04.264 23:33:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:04.264 23:33:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
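[editor's note] referrals.sh pins its three referral targets here (127.0.0.2 through 127.0.0.4; the referral port 4430 follows). A hedged sketch of the round trip the test performs below, built only from flags and jq filters that appear verbatim in this log, with rpc.py assumed to stand in for the suite's rpc_cmd wrapper and the --hostnqn/--hostid arguments omitted for brevity:

  # expose the discovery service and register the three referrals
  rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  # the same three addresses should come back from both the RPC and the wire
  rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort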
00:07:04.264 23:33:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:04.264 23:33:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:04.264 23:33:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:04.264 23:33:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:04.264 23:33:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:04.264 23:33:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:04.264 23:33:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:04.264 23:33:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:04.264 23:33:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:04.264 23:33:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:04.264 23:33:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:04.264 23:33:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:04.264 23:33:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:04.264 23:33:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:04.264 23:33:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:07:04.264 23:33:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:06.167 23:33:41 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:06.167 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:06.167 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:06.167 23:33:41 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:06.167 Found net devices under 0000:09:00.0: cvl_0_0 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:06.167 Found net devices under 0000:09:00.1: cvl_0_1 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:06.167 23:33:41 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:06.167 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:06.168 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:06.168 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:06.168 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:06.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:06.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:07:06.168 00:07:06.168 --- 10.0.0.2 ping statistics --- 00:07:06.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.168 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:07:06.168 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:06.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:06.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:07:06.168 00:07:06.168 --- 10.0.0.1 ping statistics --- 00:07:06.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.168 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:07:06.168 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:06.168 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:07:06.168 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:06.168 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:06.168 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:06.168 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:06.168 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:06.168 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:06.168 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:06.168 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:06.168 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:06.168 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:06.168 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.168 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3688768 00:07:06.168 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:06.168 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3688768 00:07:06.168 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 3688768 ']' 00:07:06.168 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.168 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:06.168 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:06.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.168 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:06.168 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.426 [2024-07-15 23:33:41.306073] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:07:06.426 [2024-07-15 23:33:41.306157] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:06.426 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.426 [2024-07-15 23:33:41.369428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:06.426 [2024-07-15 23:33:41.470238] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:06.426 [2024-07-15 23:33:41.470294] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:06.426 [2024-07-15 23:33:41.470321] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:06.426 [2024-07-15 23:33:41.470332] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:06.426 [2024-07-15 23:33:41.470342] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:06.426 [2024-07-15 23:33:41.470436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.426 [2024-07-15 23:33:41.470544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.426 [2024-07-15 23:33:41.470615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:06.426 [2024-07-15 23:33:41.470617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.683 [2024-07-15 23:33:41.628806] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.683 [2024-07-15 23:33:41.641042] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:06.683 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:06.940 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:06.940 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:06.940 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:06.940 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.940 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.940 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.940 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:06.940 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.940 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.940 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.940 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:06.940 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.940 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.940 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.940 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:06.940 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.940 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:06.940 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.940 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.940 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:06.940 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:06.940 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:06.940 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:06.940 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:06.940 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:06.940 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:07.197 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:07.197 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:07.197 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:07:07.197 23:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.197 23:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:07.197 23:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.197 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:07.197 23:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.197 23:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:07.197 23:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.197 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:07.197 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:07.197 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:07.197 23:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.197 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:07.197 23:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:07.197 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:07.197 23:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.197 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:07.197 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:07.197 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:07.197 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:07.197 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:07.197 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:07.197 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:07.197 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:07.197 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:07.197 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:07.197 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:07.197 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:07.197 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:07.197 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:07.197 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:07.454 23:33:42 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:07.454 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:07.454 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:07.454 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:07.454 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:07.454 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:07.454 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:07.454 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:07.455 23:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.455 23:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:07.455 23:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.455 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:07.455 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:07.711 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:07.712 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:07.712 23:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.712 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:07.712 23:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:07.712 23:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.712 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:07.712 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:07.712 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:07.712 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:07.712 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:07.712 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:07.712 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:07.712 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:07.712 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:07.712 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:07.712 23:33:42 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:07.712 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:07.712 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:07.712 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:07.712 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:07.986 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:07.986 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:07.986 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:07.986 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:07.986 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:07.986 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:07.986 23:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:07.986 23:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:07.986 23:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.986 23:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:07.986 23:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.986 23:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:07.986 23:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.986 23:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:07.986 23:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:07.986 23:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.986 23:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:07.986 23:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:07.986 23:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:07.986 23:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:08.243 23:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:08.243 23:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:08.243 23:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:08.243 
23:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:08.243 23:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:08.243 23:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:08.243 23:33:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:08.243 23:33:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:08.243 23:33:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:08.243 23:33:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:08.243 23:33:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:08.243 23:33:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:08.243 23:33:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:08.243 rmmod nvme_tcp 00:07:08.243 rmmod nvme_fabrics 00:07:08.243 rmmod nvme_keyring 00:07:08.243 23:33:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:08.243 23:33:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:08.243 23:33:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:08.243 23:33:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3688768 ']' 00:07:08.243 23:33:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3688768 00:07:08.243 23:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 3688768 ']' 00:07:08.243 23:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 3688768 00:07:08.243 23:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:07:08.243 23:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:08.243 23:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3688768 00:07:08.243 23:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:08.243 23:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:08.243 23:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3688768' 00:07:08.243 killing process with pid 3688768 00:07:08.243 23:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 3688768 00:07:08.243 23:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 3688768 00:07:08.501 23:33:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:08.501 23:33:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:08.501 23:33:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:08.501 23:33:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:08.501 23:33:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:08.501 23:33:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:08.501 23:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:08.501 23:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.032 23:33:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:11.032 00:07:11.032 real 0m6.688s 00:07:11.032 user 0m9.503s 00:07:11.032 sys 0m2.197s 00:07:11.032 23:33:45 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.032 23:33:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:11.032 ************************************ 00:07:11.032 END TEST nvmf_referrals 00:07:11.032 ************************************ 00:07:11.032 23:33:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:11.032 23:33:45 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:11.032 23:33:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:11.032 23:33:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.032 23:33:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:11.032 ************************************ 00:07:11.032 START TEST nvmf_connect_disconnect 00:07:11.032 ************************************ 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:11.032 * Looking for test storage... 00:07:11.032 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:11.032 23:33:45 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:07:11.032 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:12.931 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:12.931 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:12.931 23:33:47 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:12.931 Found net devices under 0000:09:00.0: cvl_0_0 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:12.931 Found net devices under 0000:09:00.1: cvl_0_1 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:12.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:12.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:07:12.931 00:07:12.931 --- 10.0.0.2 ping statistics --- 00:07:12.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:12.931 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:12.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:12.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:07:12.931 00:07:12.931 --- 10.0.0.1 ping statistics --- 00:07:12.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:12.931 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:07:12.931 23:33:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:12.931 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:12.931 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:12.931 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:12.932 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:12.932 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:12.932 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:12.932 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:12.932 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:12.932 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:12.932 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:12.932 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3691065 00:07:12.932 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:12.932 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3691065 00:07:12.932 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 3691065 ']' 00:07:12.932 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.932 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:12.932 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.932 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:12.932 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:13.189 [2024-07-15 23:33:48.077360] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
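The nvmf_tcp_init sequence traced above reduces to a handful of ip(8)/iptables commands: one ice port is moved into a fresh network namespace to act as the target while the other stays in the root namespace as the initiator. A condensed sketch, reusing the interface names (cvl_0_0, cvl_0_1) and 10.0.0.0/24 addressing from this run — on another host the ice-driver netdev names will differ:

    # Target port goes into its own namespace; initiator port stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Initiator gets 10.0.0.1, target gets 10.0.0.2, all links up.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the default NVMe/TCP port and verify reachability in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1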
00:07:13.189 [2024-07-15 23:33:48.077444] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:13.189 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.189 [2024-07-15 23:33:48.142385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:13.189 [2024-07-15 23:33:48.253765] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:13.189 [2024-07-15 23:33:48.253826] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:13.189 [2024-07-15 23:33:48.253854] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:13.189 [2024-07-15 23:33:48.253866] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:13.189 [2024-07-15 23:33:48.253876] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:13.189 [2024-07-15 23:33:48.253968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.189 [2024-07-15 23:33:48.254026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.189 [2024-07-15 23:33:48.254094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:13.189 [2024-07-15 23:33:48.254097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.446 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:13.446 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:07:13.446 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:13.446 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:13.446 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:13.447 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:13.447 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:13.447 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.447 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:13.447 [2024-07-15 23:33:48.414783] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:13.447 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.447 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:13.447 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.447 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:13.447 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.447 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:13.447 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:13.447 23:33:48 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.447 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:13.447 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.447 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:13.447 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.447 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:13.447 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.447 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:13.447 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.447 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:13.447 [2024-07-15 23:33:48.466640] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:13.447 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.447 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:07:13.447 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:07:13.447 23:33:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:07:16.741 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:19.266 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:21.830 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:24.356 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:27.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:27.625 23:34:02 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:07:27.625 23:34:02 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:07:27.625 23:34:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:27.625 23:34:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:07:27.625 23:34:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:27.625 23:34:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:07:27.625 23:34:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:27.625 23:34:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:27.625 rmmod nvme_tcp 00:07:27.625 rmmod nvme_fabrics 00:07:27.625 rmmod nvme_keyring 00:07:27.625 23:34:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:27.625 23:34:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:07:27.625 23:34:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:07:27.625 23:34:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3691065 ']' 00:07:27.625 23:34:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3691065 00:07:27.625 23:34:02 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@948 -- # '[' -z 3691065 ']' 00:07:27.625 23:34:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 3691065 00:07:27.625 23:34:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:07:27.625 23:34:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:27.625 23:34:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3691065 00:07:27.625 23:34:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:27.625 23:34:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:27.625 23:34:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3691065' 00:07:27.625 killing process with pid 3691065 00:07:27.625 23:34:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 3691065 00:07:27.625 23:34:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 3691065 00:07:27.625 23:34:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:27.625 23:34:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:27.625 23:34:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:27.625 23:34:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:27.625 23:34:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:27.625 23:34:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:27.625 23:34:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:27.625 23:34:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.532 23:34:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:29.532 00:07:29.532 real 0m18.835s 00:07:29.532 user 0m56.157s 00:07:29.532 sys 0m3.346s 00:07:29.532 23:34:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.532 23:34:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:29.532 ************************************ 00:07:29.532 END TEST nvmf_connect_disconnect 00:07:29.532 ************************************ 00:07:29.532 23:34:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:29.532 23:34:04 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:29.532 23:34:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:29.532 23:34:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.532 23:34:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:29.532 ************************************ 00:07:29.532 START TEST nvmf_multitarget 00:07:29.532 ************************************ 00:07:29.532 23:34:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:29.532 * Looking for test storage... 
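Everything the nvmf_connect_disconnect test did above goes through the target's JSON-RPC interface before the five connect/disconnect iterations run. A minimal sketch of that sequence, assuming rpc_cmd wraps the SPDK tree's scripts/rpc.py and that the loop body uses nvme-cli (the trace only shows the resulting "disconnected 1 controller(s)" lines, so the exact loop is an assumption):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # assumed backend of rpc_cmd

    # Transport, 64 MiB / 512-byte-block malloc bdev, subsystem, namespace, listener.
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
    $rpc bdev_malloc_create 64 512            # returns the bdev name "Malloc0"
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # num_iterations=5 in the trace; connect, then drop the controller again.
    for i in 1 2 3 4 5; do
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done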
00:07:29.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:29.532 23:34:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:29.532 23:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:07:29.532 23:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.532 23:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.532 23:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.532 23:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.532 23:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.532 23:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.532 23:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.532 23:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.532 23:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.532 23:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.532 23:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:29.532 23:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:29.532 23:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.532 23:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.532 23:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:29.532 23:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.532 23:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:29.532 23:34:04 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.532 23:34:04 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.532 23:34:04 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.532 23:34:04 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.532 23:34:04 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.532 23:34:04 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.532 23:34:04 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:07:29.532 23:34:04 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.532 23:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:07:29.532 23:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:29.533 23:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:29.533 23:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.533 23:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.533 23:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.533 23:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:29.533 23:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:29.533 23:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:29.533 23:34:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:29.533 23:34:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:07:29.533 23:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:29.533 23:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:29.533 23:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:29.533 23:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:29.533 23:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:29.533 23:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
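Each test above ends with the same nvmftestfini teardown, and the multitarget init being traced here will mirror it on exit. Condensed from the traced commands, with the namespace-removal step an assumption (the trace only shows _remove_spdk_ns being invoked, not its body):

    modprobe -v -r nvme-tcp            # also unloads nvme_fabrics and nvme_keyring, per the rmmod echoes
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid" # killprocess: the nvmf_tgt pid recorded by nvmfappstart
    ip netns delete cvl_0_0_ns_spdk    # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1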
00:07:29.533 23:34:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:29.533 23:34:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.533 23:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:29.533 23:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:29.533 23:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:07:29.533 23:34:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:32.067 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:32.067 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:07:32.067 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:32.067 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:32.067 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:32.067 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:32.067 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:32.067 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:07:32.067 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:32.067 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:07:32.067 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:07:32.067 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:07:32.067 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:07:32.067 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:07:32.067 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:07:32.067 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:32.067 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:32.067 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:32.067 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:32.067 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:32.067 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:32.067 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:32.067 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:32.067 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:32.067 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:32.068 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:32.068 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:32.068 Found net devices under 0000:09:00.0: cvl_0_0 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:32.068 Found net devices under 0000:09:00.1: cvl_0_1 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:32.068 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:32.068 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:07:32.068 00:07:32.068 --- 10.0.0.2 ping statistics --- 00:07:32.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.068 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:32.068 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:32.068 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:07:32.068 00:07:32.068 --- 10.0.0.1 ping statistics --- 00:07:32.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.068 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3694818 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3694818 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 3694818 ']' 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:32.068 23:34:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:32.068 [2024-07-15 23:34:06.874218] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
00:07:32.068 [2024-07-15 23:34:06.874317] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:32.068 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.068 [2024-07-15 23:34:06.940779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:32.068 [2024-07-15 23:34:07.050744] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:32.068 [2024-07-15 23:34:07.050803] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:32.068 [2024-07-15 23:34:07.050832] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:32.068 [2024-07-15 23:34:07.050843] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:32.068 [2024-07-15 23:34:07.050852] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:32.068 [2024-07-15 23:34:07.050905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.068 [2024-07-15 23:34:07.050967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:32.068 [2024-07-15 23:34:07.051027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:32.068 [2024-07-15 23:34:07.051031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.068 23:34:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:32.068 23:34:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:07:32.068 23:34:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:32.068 23:34:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:32.068 23:34:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:32.326 23:34:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:32.326 23:34:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:32.326 23:34:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:32.326 23:34:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:07:32.326 23:34:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:07:32.326 23:34:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:07:32.326 "nvmf_tgt_1" 00:07:32.326 23:34:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:07:32.583 "nvmf_tgt_2" 00:07:32.583 23:34:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:32.583 23:34:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:07:32.583 23:34:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:07:32.583 23:34:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:07:32.841 true 00:07:32.841 23:34:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:07:32.841 true 00:07:32.841 23:34:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:32.841 23:34:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:07:33.099 23:34:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:07:33.099 23:34:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:33.099 23:34:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:07:33.099 23:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:33.099 23:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:07:33.099 23:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:33.099 23:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:07:33.099 23:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:33.099 23:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:33.099 rmmod nvme_tcp 00:07:33.099 rmmod nvme_fabrics 00:07:33.099 rmmod nvme_keyring 00:07:33.099 23:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:33.099 23:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:07:33.099 23:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:07:33.099 23:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3694818 ']' 00:07:33.099 23:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3694818 00:07:33.099 23:34:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 3694818 ']' 00:07:33.099 23:34:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 3694818 00:07:33.100 23:34:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:07:33.100 23:34:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:33.100 23:34:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3694818 00:07:33.100 23:34:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:33.100 23:34:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:33.100 23:34:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3694818' 00:07:33.100 killing process with pid 3694818 00:07:33.100 23:34:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 3694818 00:07:33.100 23:34:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 3694818 00:07:33.359 23:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:33.359 23:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:33.359 23:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:33.359 23:34:08 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:33.359 23:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:33.359 23:34:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.359 23:34:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:33.359 23:34:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.267 23:34:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:35.267 00:07:35.267 real 0m5.857s 00:07:35.267 user 0m6.625s 00:07:35.267 sys 0m1.952s 00:07:35.267 23:34:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.267 23:34:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:35.267 ************************************ 00:07:35.267 END TEST nvmf_multitarget 00:07:35.267 ************************************ 00:07:35.525 23:34:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:35.526 23:34:10 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:35.526 23:34:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:35.526 23:34:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.526 23:34:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:35.526 ************************************ 00:07:35.526 START TEST nvmf_rpc 00:07:35.526 ************************************ 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:35.526 * Looking for test storage... 
00:07:35.526 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:07:35.526 23:34:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.058 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:38.058 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:07:38.058 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
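# [editor's aside, not part of the captured run] The nvmf/common.sh trace that
# follows buckets NICs by PCI vendor:device ID (the "0x8086 - 0x159b" pairs the
# log reports are Intel E810 parts) and then resolves each matching PCI
# function to its kernel net device through sysfs, exactly the
# /sys/bus/pci/devices/$pci/net/* globbing visible below. A minimal standalone
# sketch of that idea; the lspci filter and loop are illustrative, not SPDK's
# own code:
for pci in $(lspci -Dnmm | awk '$3 == "\"8086\"" && $4 == "\"159b\"" {print $1}'); do
  for netdev in "/sys/bus/pci/devices/$pci/net/"*; do
    echo "Found net devices under $pci: ${netdev##*/}"
  done
done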
00:07:38.058 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:38.058 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:38.058 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:38.058 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:38.058 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:07:38.058 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:38.058 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:07:38.058 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:07:38.058 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:07:38.058 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:07:38.058 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:07:38.058 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:07:38.058 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:38.058 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:38.058 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:38.058 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:38.058 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:38.059 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:38.059 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:38.059 Found net devices under 0000:09:00.0: cvl_0_0 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:38.059 Found net devices under 0000:09:00.1: cvl_0_1 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:38.059 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:38.059 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:07:38.059 00:07:38.059 --- 10.0.0.2 ping statistics --- 00:07:38.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.059 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:38.059 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:38.059 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:07:38.059 00:07:38.059 --- 10.0.0.1 ping statistics --- 00:07:38.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.059 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3696923 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3696923 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 3696923 ']' 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:38.059 23:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.059 [2024-07-15 23:34:12.865324] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:07:38.059 [2024-07-15 23:34:12.865405] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:38.059 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.059 [2024-07-15 23:34:12.928450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:38.059 [2024-07-15 23:34:13.029360] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:38.059 [2024-07-15 23:34:13.029415] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:38.059 [2024-07-15 23:34:13.029442] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:38.059 [2024-07-15 23:34:13.029453] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:38.059 [2024-07-15 23:34:13.029461] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:38.059 [2024-07-15 23:34:13.029550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.059 [2024-07-15 23:34:13.029607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:38.059 [2024-07-15 23:34:13.029718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:38.059 [2024-07-15 23:34:13.029726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.059 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:38.059 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:38.059 23:34:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:38.059 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:38.060 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.317 23:34:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:38.317 23:34:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:07:38.317 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.317 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.317 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.317 23:34:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:07:38.317 "tick_rate": 2700000000, 00:07:38.317 "poll_groups": [ 00:07:38.317 { 00:07:38.317 "name": "nvmf_tgt_poll_group_000", 00:07:38.317 "admin_qpairs": 0, 00:07:38.317 "io_qpairs": 0, 00:07:38.317 "current_admin_qpairs": 0, 00:07:38.317 "current_io_qpairs": 0, 00:07:38.317 "pending_bdev_io": 0, 00:07:38.317 "completed_nvme_io": 0, 00:07:38.317 "transports": [] 00:07:38.317 }, 00:07:38.317 { 00:07:38.317 "name": "nvmf_tgt_poll_group_001", 00:07:38.317 "admin_qpairs": 0, 00:07:38.317 "io_qpairs": 0, 00:07:38.317 "current_admin_qpairs": 0, 00:07:38.317 "current_io_qpairs": 0, 00:07:38.317 "pending_bdev_io": 0, 00:07:38.317 "completed_nvme_io": 0, 00:07:38.317 "transports": [] 00:07:38.317 }, 00:07:38.317 { 00:07:38.317 "name": "nvmf_tgt_poll_group_002", 00:07:38.317 "admin_qpairs": 0, 00:07:38.317 "io_qpairs": 0, 00:07:38.317 "current_admin_qpairs": 0, 00:07:38.317 "current_io_qpairs": 0, 00:07:38.317 "pending_bdev_io": 0, 00:07:38.317 "completed_nvme_io": 0, 00:07:38.317 "transports": [] 00:07:38.317 }, 00:07:38.317 { 00:07:38.317 "name": "nvmf_tgt_poll_group_003", 00:07:38.317 "admin_qpairs": 0, 00:07:38.317 "io_qpairs": 0, 00:07:38.317 "current_admin_qpairs": 0, 00:07:38.317 "current_io_qpairs": 0, 00:07:38.317 "pending_bdev_io": 0, 00:07:38.317 "completed_nvme_io": 0, 00:07:38.317 "transports": [] 00:07:38.317 } 00:07:38.317 ] 00:07:38.317 }' 00:07:38.317 23:34:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:07:38.317 23:34:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:07:38.317 23:34:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:07:38.317 23:34:13 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:07:38.317 23:34:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:07:38.317 23:34:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:07:38.317 23:34:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:07:38.317 23:34:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:38.317 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.317 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.317 [2024-07-15 23:34:13.268084] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:38.317 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.317 23:34:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:07:38.317 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.317 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.317 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.317 23:34:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:07:38.317 "tick_rate": 2700000000, 00:07:38.317 "poll_groups": [ 00:07:38.317 { 00:07:38.317 "name": "nvmf_tgt_poll_group_000", 00:07:38.317 "admin_qpairs": 0, 00:07:38.317 "io_qpairs": 0, 00:07:38.317 "current_admin_qpairs": 0, 00:07:38.317 "current_io_qpairs": 0, 00:07:38.317 "pending_bdev_io": 0, 00:07:38.317 "completed_nvme_io": 0, 00:07:38.317 "transports": [ 00:07:38.317 { 00:07:38.317 "trtype": "TCP" 00:07:38.317 } 00:07:38.317 ] 00:07:38.317 }, 00:07:38.317 { 00:07:38.317 "name": "nvmf_tgt_poll_group_001", 00:07:38.317 "admin_qpairs": 0, 00:07:38.317 "io_qpairs": 0, 00:07:38.317 "current_admin_qpairs": 0, 00:07:38.317 "current_io_qpairs": 0, 00:07:38.317 "pending_bdev_io": 0, 00:07:38.317 "completed_nvme_io": 0, 00:07:38.317 "transports": [ 00:07:38.317 { 00:07:38.317 "trtype": "TCP" 00:07:38.317 } 00:07:38.317 ] 00:07:38.317 }, 00:07:38.317 { 00:07:38.317 "name": "nvmf_tgt_poll_group_002", 00:07:38.317 "admin_qpairs": 0, 00:07:38.317 "io_qpairs": 0, 00:07:38.317 "current_admin_qpairs": 0, 00:07:38.317 "current_io_qpairs": 0, 00:07:38.317 "pending_bdev_io": 0, 00:07:38.317 "completed_nvme_io": 0, 00:07:38.317 "transports": [ 00:07:38.317 { 00:07:38.317 "trtype": "TCP" 00:07:38.317 } 00:07:38.317 ] 00:07:38.317 }, 00:07:38.317 { 00:07:38.317 "name": "nvmf_tgt_poll_group_003", 00:07:38.317 "admin_qpairs": 0, 00:07:38.317 "io_qpairs": 0, 00:07:38.317 "current_admin_qpairs": 0, 00:07:38.317 "current_io_qpairs": 0, 00:07:38.317 "pending_bdev_io": 0, 00:07:38.317 "completed_nvme_io": 0, 00:07:38.317 "transports": [ 00:07:38.317 { 00:07:38.317 "trtype": "TCP" 00:07:38.317 } 00:07:38.317 ] 00:07:38.317 } 00:07:38.317 ] 00:07:38.317 }' 00:07:38.317 23:34:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:07:38.317 23:34:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:38.317 23:34:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:38.317 23:34:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:38.317 23:34:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:07:38.317 23:34:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:07:38.317 23:34:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
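# [editor's aside, not part of the captured run] The jcount/jsum helpers traced
# around this point reduce the nvmf_get_stats JSON to scalars: jq extracts one
# value per poll group, then wc -l counts them or awk sums them. A sketch of
# the two pipelines exactly as the trace shows, with $stats standing in for the
# captured JSON:
echo "$stats" | jq '.poll_groups[].name' | wc -l                              # jcount -> 4
echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1}END{print s}'  # jsum   -> 0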
00:07:38.317 23:34:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:38.317 23:34:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:38.317 23:34:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:07:38.317 23:34:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:07:38.317 23:34:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:07:38.317 23:34:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:07:38.318 23:34:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:07:38.318 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.318 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.318 Malloc1 00:07:38.318 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.318 23:34:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:38.318 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.318 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.318 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.318 23:34:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:38.318 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.318 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.318 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.318 23:34:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:07:38.318 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.318 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.318 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.318 23:34:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:38.318 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.318 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.318 [2024-07-15 23:34:13.406839] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:38.318 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.318 23:34:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:07:38.318 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:38.318 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:07:38.318 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:07:38.318 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:38.318 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:38.318 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:38.318 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:38.318 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:38.318 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:38.318 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:38.318 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:07:38.318 [2024-07-15 23:34:13.429289] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:07:38.574 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:38.574 could not add new controller: failed to write to nvme-fabrics device 00:07:38.574 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:38.574 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:38.574 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:38.574 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:38.574 23:34:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:38.574 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.574 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.574 23:34:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.574 23:34:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:39.137 23:34:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:07:39.137 23:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:39.137 23:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:39.137 23:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:39.137 23:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:41.033 23:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:41.033 23:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:41.033 23:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:41.291 23:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:41.291 23:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:41.291 23:34:16 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:41.291 23:34:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:41.291 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:41.291 23:34:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:41.291 23:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:41.291 23:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:41.291 23:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:41.291 23:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:41.291 23:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:41.291 23:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:41.291 23:34:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:41.291 23:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.291 23:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:41.291 23:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.291 23:34:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:41.291 23:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:41.291 23:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:41.291 23:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:07:41.291 23:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:41.291 23:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:41.291 23:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:41.291 23:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:41.291 23:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:41.291 23:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:41.291 23:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:41.291 23:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:41.291 [2024-07-15 23:34:16.259367] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:07:41.291 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:41.291 could not add new controller: failed to write to nvme-fabrics device 00:07:41.291 23:34:16 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:07:41.291 23:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:41.291 23:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:41.291 23:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:41.291 23:34:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:07:41.291 23:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.291 23:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:41.292 23:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.292 23:34:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:41.858 23:34:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:07:41.858 23:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:41.858 23:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:41.858 23:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:41.858 23:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:44.384 23:34:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:44.384 23:34:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:44.384 23:34:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:44.384 23:34:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:44.384 23:34:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:44.384 23:34:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:44.384 23:34:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:44.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:44.384 23:34:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:44.384 23:34:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:44.384 23:34:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:44.384 23:34:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:44.385 23:34:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:44.385 23:34:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:44.385 23:34:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:44.385 23:34:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:44.385 23:34:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.385 23:34:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.385 23:34:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.385 23:34:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:07:44.385 23:34:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:44.385 23:34:19 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:44.385 23:34:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.385 23:34:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.385 23:34:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.385 23:34:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:44.385 23:34:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.385 23:34:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.385 [2024-07-15 23:34:19.074346] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:44.385 23:34:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.385 23:34:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:44.385 23:34:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.385 23:34:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.385 23:34:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.385 23:34:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:44.385 23:34:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.385 23:34:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.385 23:34:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.385 23:34:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:44.948 23:34:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:44.948 23:34:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:44.948 23:34:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:44.948 23:34:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:44.948 23:34:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:46.839 23:34:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:46.839 23:34:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:46.839 23:34:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:46.839 23:34:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:46.839 23:34:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:46.839 23:34:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:46.839 23:34:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:46.839 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:46.839 23:34:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:46.839 23:34:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:46.839 23:34:21 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:46.839 23:34:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:46.839 23:34:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:46.839 23:34:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:46.839 23:34:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:46.839 23:34:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:46.839 23:34:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.840 23:34:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.840 23:34:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.840 23:34:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:46.840 23:34:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.840 23:34:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.840 23:34:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.840 23:34:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:46.840 23:34:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:46.840 23:34:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.840 23:34:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.840 23:34:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.840 23:34:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:46.840 23:34:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.840 23:34:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.840 [2024-07-15 23:34:21.921254] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:46.840 23:34:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.840 23:34:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:46.840 23:34:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.840 23:34:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.840 23:34:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.840 23:34:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:46.840 23:34:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.840 23:34:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.840 23:34:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.840 23:34:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:47.417 23:34:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:47.417 23:34:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:07:47.417 23:34:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:47.417 23:34:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:47.417 23:34:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:49.938 23:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:49.938 23:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:49.938 23:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:49.938 23:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:49.938 23:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:49.938 23:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:49.938 23:34:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:49.938 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:49.938 23:34:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:49.938 23:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:49.938 23:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:49.938 23:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:49.938 23:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:49.938 23:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:49.938 23:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:49.938 23:34:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:49.938 23:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.938 23:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.938 23:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.938 23:34:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:49.938 23:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.938 23:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.938 23:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.938 23:34:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:49.938 23:34:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:49.938 23:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.938 23:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.938 23:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.938 23:34:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:49.938 23:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.938 23:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.938 [2024-07-15 23:34:24.693199] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:07:49.938 23:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.938 23:34:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:49.938 23:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.938 23:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.938 23:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.938 23:34:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:49.938 23:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.938 23:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.938 23:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.938 23:34:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:50.504 23:34:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:50.504 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:50.504 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:50.504 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:50.504 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:52.400 23:34:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:52.400 23:34:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:52.400 23:34:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:52.400 23:34:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:52.400 23:34:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:52.400 23:34:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:52.400 23:34:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:52.658 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:52.658 23:34:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:52.658 23:34:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:52.658 23:34:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:52.658 23:34:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:52.658 23:34:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:52.658 23:34:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:52.658 23:34:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:52.658 23:34:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:52.658 23:34:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.658 23:34:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.658 23:34:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:07:52.658 23:34:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:52.658 23:34:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.658 23:34:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.658 23:34:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.658 23:34:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:52.658 23:34:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:52.658 23:34:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.658 23:34:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.658 23:34:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.658 23:34:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:52.658 23:34:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.658 23:34:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.658 [2024-07-15 23:34:27.589820] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:52.658 23:34:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.658 23:34:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:52.658 23:34:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.658 23:34:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.658 23:34:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.658 23:34:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:52.658 23:34:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.658 23:34:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.658 23:34:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.658 23:34:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:53.223 23:34:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:53.223 23:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:53.223 23:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:53.223 23:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:53.223 23:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:55.118 23:34:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:55.119 23:34:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:55.119 23:34:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:55.119 23:34:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:55.119 23:34:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:55.119 
23:34:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:55.119 23:34:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:55.377 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:55.377 23:34:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:55.377 23:34:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:55.377 23:34:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:55.377 23:34:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:55.377 23:34:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:55.377 23:34:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:55.377 23:34:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:55.377 23:34:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:55.377 23:34:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.377 23:34:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.377 23:34:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.377 23:34:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:55.377 23:34:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.377 23:34:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.377 23:34:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.377 23:34:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:55.377 23:34:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:55.377 23:34:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.377 23:34:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.377 23:34:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.377 23:34:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:55.377 23:34:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.377 23:34:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.377 [2024-07-15 23:34:30.365985] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:55.377 23:34:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.377 23:34:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:55.377 23:34:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.377 23:34:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.377 23:34:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.377 23:34:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:55.377 23:34:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.377 23:34:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.377 23:34:30 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.377 23:34:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:55.942 23:34:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:55.942 23:34:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:55.942 23:34:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:55.942 23:34:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:55.942 23:34:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:58.470 23:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:58.470 23:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:58.470 23:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:58.470 23:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:58.470 23:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:58.470 23:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:58.470 23:34:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:58.470 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.470 [2024-07-15 23:34:33.122344] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.470 [2024-07-15 23:34:33.170423] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.470 [2024-07-15 23:34:33.218578] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
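Condensed from the xtrace above, one pass of the target/rpc.sh@81-@94 connect loop reduces to the commands below. This is a sketch rather than the script verbatim: rpc_cmd is rendered as a direct scripts/rpc.py call (an assumption for readability), and the waitforserial polling helper is summarized in a comment. All NQNs, addresses, and the serial string are exactly as traced.

  # Build the subsystem, attach a host over NVMe/TCP, then tear everything down.
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
  rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
               --hostid=29f67375-a902-e411-ace9-001e67bc3c9a \
               -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  # waitforserial: poll `lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME`
  # every 2 s (up to 16 tries, per the i++ <= 15 check traced above) until the
  # expected device count appears, then:
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1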
00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.470 [2024-07-15 23:34:33.266748] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:58.470 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
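The loop being traced at this point (target/rpc.sh@99-@107) exercises the same RPCs with no host attached: it churns subsystem create/teardown five times (the `seq 1 5` above). One pass, sketched under the same rpc.py assumption:

  for i in $(seq 1 5); do
      rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1  # no -n: nsid is auto-assigned; the trace removes nsid 1
      rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
      rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done

The nvmf_get_stats check that follows sums the poll-group counters with jq piped to awk (the jsum helper): admin_qpairs 2+2+1+2 = 7 and io_qpairs 4 x 84 = 336, and the test only asserts that both sums are positive.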
00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.471 [2024-07-15 23:34:33.314903] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:07:58.471 "tick_rate": 2700000000, 00:07:58.471 "poll_groups": [ 00:07:58.471 { 00:07:58.471 "name": "nvmf_tgt_poll_group_000", 00:07:58.471 "admin_qpairs": 2, 00:07:58.471 "io_qpairs": 84, 00:07:58.471 "current_admin_qpairs": 0, 00:07:58.471 "current_io_qpairs": 0, 00:07:58.471 "pending_bdev_io": 0, 00:07:58.471 "completed_nvme_io": 91, 00:07:58.471 "transports": [ 00:07:58.471 { 00:07:58.471 "trtype": "TCP" 00:07:58.471 } 00:07:58.471 ] 00:07:58.471 }, 00:07:58.471 { 00:07:58.471 "name": "nvmf_tgt_poll_group_001", 00:07:58.471 "admin_qpairs": 2, 00:07:58.471 "io_qpairs": 84, 00:07:58.471 "current_admin_qpairs": 0, 00:07:58.471 "current_io_qpairs": 0, 00:07:58.471 "pending_bdev_io": 0, 00:07:58.471 "completed_nvme_io": 185, 00:07:58.471 "transports": [ 00:07:58.471 { 00:07:58.471 "trtype": "TCP" 00:07:58.471 } 00:07:58.471 ] 00:07:58.471 }, 00:07:58.471 { 00:07:58.471 
"name": "nvmf_tgt_poll_group_002", 00:07:58.471 "admin_qpairs": 1, 00:07:58.471 "io_qpairs": 84, 00:07:58.471 "current_admin_qpairs": 0, 00:07:58.471 "current_io_qpairs": 0, 00:07:58.471 "pending_bdev_io": 0, 00:07:58.471 "completed_nvme_io": 182, 00:07:58.471 "transports": [ 00:07:58.471 { 00:07:58.471 "trtype": "TCP" 00:07:58.471 } 00:07:58.471 ] 00:07:58.471 }, 00:07:58.471 { 00:07:58.471 "name": "nvmf_tgt_poll_group_003", 00:07:58.471 "admin_qpairs": 2, 00:07:58.471 "io_qpairs": 84, 00:07:58.471 "current_admin_qpairs": 0, 00:07:58.471 "current_io_qpairs": 0, 00:07:58.471 "pending_bdev_io": 0, 00:07:58.471 "completed_nvme_io": 228, 00:07:58.471 "transports": [ 00:07:58.471 { 00:07:58.471 "trtype": "TCP" 00:07:58.471 } 00:07:58.471 ] 00:07:58.471 } 00:07:58.471 ] 00:07:58.471 }' 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:58.471 rmmod nvme_tcp 00:07:58.471 rmmod nvme_fabrics 00:07:58.471 rmmod nvme_keyring 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3696923 ']' 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3696923 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 3696923 ']' 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 3696923 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3696923 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3696923' 00:07:58.471 killing process with pid 3696923 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 3696923 00:07:58.471 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 3696923 00:07:58.730 23:34:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:58.730 23:34:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:58.730 23:34:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:58.730 23:34:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:58.730 23:34:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:58.730 23:34:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.730 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:58.730 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.266 23:34:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:01.266 00:08:01.266 real 0m25.434s 00:08:01.266 user 1m22.236s 00:08:01.266 sys 0m4.242s 00:08:01.266 23:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:01.266 23:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.266 ************************************ 00:08:01.266 END TEST nvmf_rpc 00:08:01.266 ************************************ 00:08:01.267 23:34:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:01.267 23:34:35 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:01.267 23:34:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:01.267 23:34:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:01.267 23:34:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:01.267 ************************************ 00:08:01.267 START TEST nvmf_invalid 00:08:01.267 ************************************ 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:01.267 * Looking for test storage... 
00:08:01.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:01.267 23:34:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:01.267 23:34:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.267 23:34:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:01.267 23:34:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.267 23:34:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:01.267 23:34:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:01.267 23:34:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:08:01.267 23:34:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:03.168 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:03.168 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:08:03.168 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:03.168 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:03.168 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:03.168 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:03.169 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:03.169 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:03.169 Found net devices under 0000:09:00.0: cvl_0_0 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:03.169 Found net devices under 0000:09:00.1: cvl_0_1 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:03.169 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:03.169 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:08:03.169 00:08:03.169 --- 10.0.0.2 ping statistics --- 00:08:03.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.169 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:03.169 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:03.169 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:08:03.169 00:08:03.169 --- 10.0.0.1 ping statistics --- 00:08:03.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.169 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=3701424 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3701424 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 3701424 ']' 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:03.169 23:34:38 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:03.169 [2024-07-15 23:34:38.281151] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
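Before the nvmf_invalid target starts, the nvmf/common.sh trace above wires the two E810 ports into a back-to-back topology. Stripped of the xtrace noise, the network plumbing is (every command below appears verbatim in the trace):

  # Move the target-side port into its own netns; address both ends on 10.0.0.0/24.
  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1    # drop stale addresses
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # ~0.23 ms in the run above
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # ~0.06 ms in the run above

nvmf_tgt (pid 3701424) is then launched inside that namespace via `ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF`, which is why the target listens on 10.0.0.2 while the nvme-cli initiator stays in the root namespace on 10.0.0.1.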
00:08:03.169 [2024-07-15 23:34:38.281232] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.428 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.428 [2024-07-15 23:34:38.344661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:03.428 [2024-07-15 23:34:38.444276] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:03.428 [2024-07-15 23:34:38.444330] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:03.428 [2024-07-15 23:34:38.444359] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:03.428 [2024-07-15 23:34:38.444370] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:03.428 [2024-07-15 23:34:38.444379] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:03.428 [2024-07-15 23:34:38.444477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.428 [2024-07-15 23:34:38.444537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:03.428 [2024-07-15 23:34:38.444614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:03.428 [2024-07-15 23:34:38.444617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.685 23:34:38 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:03.685 23:34:38 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:08:03.685 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:03.685 23:34:38 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:03.685 23:34:38 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:03.685 23:34:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:03.685 23:34:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:03.685 23:34:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode11733 00:08:03.943 [2024-07-15 23:34:38.878643] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:08:03.943 23:34:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:08:03.943 { 00:08:03.943 "nqn": "nqn.2016-06.io.spdk:cnode11733", 00:08:03.943 "tgt_name": "foobar", 00:08:03.943 "method": "nvmf_create_subsystem", 00:08:03.943 "req_id": 1 00:08:03.943 } 00:08:03.943 Got JSON-RPC error response 00:08:03.943 response: 00:08:03.943 { 00:08:03.943 "code": -32603, 00:08:03.943 "message": "Unable to find target foobar" 00:08:03.943 }' 00:08:03.943 23:34:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:08:03.943 { 00:08:03.943 "nqn": "nqn.2016-06.io.spdk:cnode11733", 00:08:03.943 "tgt_name": "foobar", 00:08:03.943 "method": "nvmf_create_subsystem", 00:08:03.943 "req_id": 1 00:08:03.943 } 00:08:03.943 Got JSON-RPC error response 00:08:03.943 response: 00:08:03.943 { 00:08:03.943 "code": -32603, 00:08:03.943 "message": "Unable to find target foobar" 
00:08:03.943 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:08:03.943 23:34:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:08:03.943 23:34:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode22302 00:08:04.210 [2024-07-15 23:34:39.151552] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22302: invalid serial number 'SPDKISFASTANDAWESOME' 00:08:04.210 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:08:04.210 { 00:08:04.210 "nqn": "nqn.2016-06.io.spdk:cnode22302", 00:08:04.210 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:08:04.210 "method": "nvmf_create_subsystem", 00:08:04.210 "req_id": 1 00:08:04.210 } 00:08:04.210 Got JSON-RPC error response 00:08:04.210 response: 00:08:04.210 { 00:08:04.210 "code": -32602, 00:08:04.210 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:08:04.210 }' 00:08:04.210 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:08:04.210 { 00:08:04.210 "nqn": "nqn.2016-06.io.spdk:cnode22302", 00:08:04.210 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:08:04.210 "method": "nvmf_create_subsystem", 00:08:04.210 "req_id": 1 00:08:04.210 } 00:08:04.210 Got JSON-RPC error response 00:08:04.210 response: 00:08:04.210 { 00:08:04.210 "code": -32602, 00:08:04.210 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:08:04.210 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:04.210 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:08:04.210 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode20750 00:08:04.498 [2024-07-15 23:34:39.396362] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20750: invalid model number 'SPDK_Controller' 00:08:04.498 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:08:04.498 { 00:08:04.498 "nqn": "nqn.2016-06.io.spdk:cnode20750", 00:08:04.498 "model_number": "SPDK_Controller\u001f", 00:08:04.498 "method": "nvmf_create_subsystem", 00:08:04.498 "req_id": 1 00:08:04.498 } 00:08:04.498 Got JSON-RPC error response 00:08:04.498 response: 00:08:04.498 { 00:08:04.498 "code": -32602, 00:08:04.498 "message": "Invalid MN SPDK_Controller\u001f" 00:08:04.498 }' 00:08:04.498 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:08:04.498 { 00:08:04.498 "nqn": "nqn.2016-06.io.spdk:cnode20750", 00:08:04.498 "model_number": "SPDK_Controller\u001f", 00:08:04.498 "method": "nvmf_create_subsystem", 00:08:04.498 "req_id": 1 00:08:04.498 } 00:08:04.498 Got JSON-RPC error response 00:08:04.498 response: 00:08:04.498 { 00:08:04.498 "code": -32602, 00:08:04.498 "message": "Invalid MN SPDK_Controller\u001f" 00:08:04.498 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:04.498 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:08:04.498 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:08:04.498 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' 
'83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:04.498 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:08:04.498 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:08:04.498 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:04.498 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.498 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:08:04.498 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:08:04.498 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:08:04.498 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.498 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.498 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:08:04.498 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:08:04.498 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:08:04.498 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.498 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.498 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:08:04.498 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:08:04.498 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:08:04.498 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.498 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.499 
23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ z == \- ]] 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'z=EPD)T?R$~7p7&tM(nFI' 00:08:04.499 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'z=EPD)T?R$~7p7&tM(nFI' nqn.2016-06.io.spdk:cnode16840 00:08:04.758 [2024-07-15 23:34:39.725471] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16840: invalid serial number 'z=EPD)T?R$~7p7&tM(nFI' 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:08:04.758 { 00:08:04.758 "nqn": "nqn.2016-06.io.spdk:cnode16840", 00:08:04.758 "serial_number": "z=EPD)T?R$~7p7&tM(nFI", 00:08:04.758 "method": "nvmf_create_subsystem", 00:08:04.758 "req_id": 1 00:08:04.758 } 00:08:04.758 Got JSON-RPC error response 00:08:04.758 response: 
00:08:04.758 { 00:08:04.758 "code": -32602, 00:08:04.758 "message": "Invalid SN z=EPD)T?R$~7p7&tM(nFI" 00:08:04.758 }' 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:08:04.758 { 00:08:04.758 "nqn": "nqn.2016-06.io.spdk:cnode16840", 00:08:04.758 "serial_number": "z=EPD)T?R$~7p7&tM(nFI", 00:08:04.758 "method": "nvmf_create_subsystem", 00:08:04.758 "req_id": 1 00:08:04.758 } 00:08:04.758 Got JSON-RPC error response 00:08:04.758 response: 00:08:04.758 { 00:08:04.758 "code": -32602, 00:08:04.758 "message": "Invalid SN z=EPD)T?R$~7p7&tM(nFI" 00:08:04.758 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 103 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo 
-e '\x7d' 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=U 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:08:04.758 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ 
)) 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:04.759 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:05.017 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:08:05.017 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:08:05.017 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:08:05.017 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:05.017 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:05.017 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:08:05.017 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:08:05.017 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:08:05.017 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:05.017 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:08:05.017 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:08:05.017 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:08:05.017 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:08:05.017 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:05.017 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:05.017 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:08:05.017 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:08:05.017 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:08:05.017 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:05.017 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:05.017 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:08:05.017 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:08:05.017 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:08:05.017 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:05.017 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:05.017 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:08:05.017 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:08:05.017 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:08:05.017 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:05.017 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:05.017 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ $ == \- ]] 00:08:05.017 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '$:z8gJHQ@us`}Q~V@gj(U5ek1y~:]P5lOo3zyKl]'\''' 00:08:05.017 23:34:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '$:z8gJHQ@us`}Q~V@gj(U5ek1y~:]P5lOo3zyKl]'\''' nqn.2016-06.io.spdk:cnode6213 00:08:05.274 [2024-07-15 23:34:40.146880] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6213: invalid model number '$:z8gJHQ@us`}Q~V@gj(U5ek1y~:]P5lOo3zyKl]'' 00:08:05.274 23:34:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:08:05.274 { 00:08:05.274 "nqn": "nqn.2016-06.io.spdk:cnode6213", 00:08:05.274 "model_number": "$:z8gJHQ@us`}Q~V@gj(U5ek1y~:]P5lOo3zyKl]'\''", 00:08:05.274 "method": "nvmf_create_subsystem", 00:08:05.274 "req_id": 1 00:08:05.274 } 00:08:05.274 Got JSON-RPC error response 00:08:05.274 response: 00:08:05.274 { 00:08:05.274 "code": -32602, 00:08:05.274 "message": "Invalid MN $:z8gJHQ@us`}Q~V@gj(U5ek1y~:]P5lOo3zyKl]'\''" 00:08:05.274 }' 00:08:05.274 23:34:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:08:05.274 { 00:08:05.274 "nqn": "nqn.2016-06.io.spdk:cnode6213", 00:08:05.274 "model_number": "$:z8gJHQ@us`}Q~V@gj(U5ek1y~:]P5lOo3zyKl]'", 00:08:05.274 "method": "nvmf_create_subsystem", 00:08:05.274 "req_id": 1 00:08:05.274 } 00:08:05.274 Got JSON-RPC error response 00:08:05.274 response: 00:08:05.274 { 00:08:05.274 "code": -32602, 00:08:05.274 "message": "Invalid MN $:z8gJHQ@us`}Q~V@gj(U5ek1y~:]P5lOo3zyKl]'" 00:08:05.274 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:05.274 23:34:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # 
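With the malformed-name cases done, the script moves from bad input to listener and cntlid handling, which needs a live transport and a valid subsystem first. The two RPCs it issues next, as they appear in the trace (workspace path shortened):

  scripts/rpc.py nvmf_create_transport --trtype tcp
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a

The -a flag allows any host to connect; the *** TCP Transport Init *** notice below confirms the transport came up.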
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:08:05.531 [2024-07-15 23:34:40.411796] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:05.531 23:34:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:08:05.788 23:34:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:08:05.788 23:34:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:08:05.788 23:34:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:08:05.788 23:34:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:08:05.788 23:34:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:08:05.788 [2024-07-15 23:34:40.897425] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:08:06.046 23:34:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:08:06.046 { 00:08:06.046 "nqn": "nqn.2016-06.io.spdk:cnode", 00:08:06.046 "listen_address": { 00:08:06.046 "trtype": "tcp", 00:08:06.046 "traddr": "", 00:08:06.046 "trsvcid": "4421" 00:08:06.046 }, 00:08:06.046 "method": "nvmf_subsystem_remove_listener", 00:08:06.046 "req_id": 1 00:08:06.046 } 00:08:06.046 Got JSON-RPC error response 00:08:06.046 response: 00:08:06.046 { 00:08:06.046 "code": -32602, 00:08:06.046 "message": "Invalid parameters" 00:08:06.046 }' 00:08:06.046 23:34:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:08:06.046 { 00:08:06.046 "nqn": "nqn.2016-06.io.spdk:cnode", 00:08:06.046 "listen_address": { 00:08:06.046 "trtype": "tcp", 00:08:06.046 "traddr": "", 00:08:06.046 "trsvcid": "4421" 00:08:06.046 }, 00:08:06.046 "method": "nvmf_subsystem_remove_listener", 00:08:06.046 "req_id": 1 00:08:06.046 } 00:08:06.046 Got JSON-RPC error response 00:08:06.046 response: 00:08:06.046 { 00:08:06.046 "code": -32602, 00:08:06.046 "message": "Invalid parameters" 00:08:06.046 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:08:06.046 23:34:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3712 -i 0 00:08:06.046 [2024-07-15 23:34:41.162286] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3712: invalid cntlid range [0-65519] 00:08:06.303 23:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:08:06.303 { 00:08:06.303 "nqn": "nqn.2016-06.io.spdk:cnode3712", 00:08:06.303 "min_cntlid": 0, 00:08:06.303 "method": "nvmf_create_subsystem", 00:08:06.303 "req_id": 1 00:08:06.303 } 00:08:06.303 Got JSON-RPC error response 00:08:06.303 response: 00:08:06.303 { 00:08:06.303 "code": -32602, 00:08:06.303 "message": "Invalid cntlid range [0-65519]" 00:08:06.303 }' 00:08:06.303 23:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:08:06.303 { 00:08:06.303 "nqn": "nqn.2016-06.io.spdk:cnode3712", 00:08:06.303 "min_cntlid": 0, 00:08:06.303 "method": "nvmf_create_subsystem", 00:08:06.303 "req_id": 1 00:08:06.303 } 00:08:06.303 Got JSON-RPC error response 00:08:06.303 response: 00:08:06.303 { 00:08:06.303 "code": -32602, 00:08:06.303 "message": "Invalid cntlid range [0-65519]" 00:08:06.303 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* 
]] 00:08:06.303 23:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21364 -i 65520 00:08:06.303 [2024-07-15 23:34:41.407083] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21364: invalid cntlid range [65520-65519] 00:08:06.303 23:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:08:06.303 { 00:08:06.304 "nqn": "nqn.2016-06.io.spdk:cnode21364", 00:08:06.304 "min_cntlid": 65520, 00:08:06.304 "method": "nvmf_create_subsystem", 00:08:06.304 "req_id": 1 00:08:06.304 } 00:08:06.304 Got JSON-RPC error response 00:08:06.304 response: 00:08:06.304 { 00:08:06.304 "code": -32602, 00:08:06.304 "message": "Invalid cntlid range [65520-65519]" 00:08:06.304 }' 00:08:06.304 23:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:08:06.304 { 00:08:06.304 "nqn": "nqn.2016-06.io.spdk:cnode21364", 00:08:06.304 "min_cntlid": 65520, 00:08:06.304 "method": "nvmf_create_subsystem", 00:08:06.304 "req_id": 1 00:08:06.304 } 00:08:06.304 Got JSON-RPC error response 00:08:06.304 response: 00:08:06.304 { 00:08:06.304 "code": -32602, 00:08:06.304 "message": "Invalid cntlid range [65520-65519]" 00:08:06.304 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:06.560 23:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16030 -I 0 00:08:06.560 [2024-07-15 23:34:41.671930] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16030: invalid cntlid range [1-0] 00:08:06.818 23:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:08:06.818 { 00:08:06.818 "nqn": "nqn.2016-06.io.spdk:cnode16030", 00:08:06.818 "max_cntlid": 0, 00:08:06.818 "method": "nvmf_create_subsystem", 00:08:06.818 "req_id": 1 00:08:06.818 } 00:08:06.818 Got JSON-RPC error response 00:08:06.818 response: 00:08:06.818 { 00:08:06.818 "code": -32602, 00:08:06.818 "message": "Invalid cntlid range [1-0]" 00:08:06.818 }' 00:08:06.818 23:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:08:06.818 { 00:08:06.818 "nqn": "nqn.2016-06.io.spdk:cnode16030", 00:08:06.818 "max_cntlid": 0, 00:08:06.818 "method": "nvmf_create_subsystem", 00:08:06.818 "req_id": 1 00:08:06.818 } 00:08:06.818 Got JSON-RPC error response 00:08:06.818 response: 00:08:06.818 { 00:08:06.818 "code": -32602, 00:08:06.818 "message": "Invalid cntlid range [1-0]" 00:08:06.818 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:06.818 23:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23632 -I 65520 00:08:06.818 [2024-07-15 23:34:41.920775] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23632: invalid cntlid range [1-65520] 00:08:06.818 23:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:08:06.818 { 00:08:06.818 "nqn": "nqn.2016-06.io.spdk:cnode23632", 00:08:06.818 "max_cntlid": 65520, 00:08:06.818 "method": "nvmf_create_subsystem", 00:08:06.818 "req_id": 1 00:08:06.818 } 00:08:06.818 Got JSON-RPC error response 00:08:06.818 response: 00:08:06.818 { 00:08:06.818 "code": -32602, 00:08:06.818 "message": "Invalid cntlid range [1-65520]" 00:08:06.818 }' 00:08:06.818 23:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ 
request: 00:08:06.818 { 00:08:06.818 "nqn": "nqn.2016-06.io.spdk:cnode23632", 00:08:06.818 "max_cntlid": 65520, 00:08:06.818 "method": "nvmf_create_subsystem", 00:08:06.818 "req_id": 1 00:08:06.818 } 00:08:06.818 Got JSON-RPC error response 00:08:06.818 response: 00:08:06.818 { 00:08:06.818 "code": -32602, 00:08:06.818 "message": "Invalid cntlid range [1-65520]" 00:08:06.818 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:07.075 23:34:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12205 -i 6 -I 5 00:08:07.075 [2024-07-15 23:34:42.173622] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12205: invalid cntlid range [6-5] 00:08:07.075 23:34:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:08:07.075 { 00:08:07.075 "nqn": "nqn.2016-06.io.spdk:cnode12205", 00:08:07.075 "min_cntlid": 6, 00:08:07.075 "max_cntlid": 5, 00:08:07.075 "method": "nvmf_create_subsystem", 00:08:07.075 "req_id": 1 00:08:07.075 } 00:08:07.075 Got JSON-RPC error response 00:08:07.075 response: 00:08:07.075 { 00:08:07.075 "code": -32602, 00:08:07.075 "message": "Invalid cntlid range [6-5]" 00:08:07.075 }' 00:08:07.075 23:34:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:08:07.075 { 00:08:07.075 "nqn": "nqn.2016-06.io.spdk:cnode12205", 00:08:07.075 "min_cntlid": 6, 00:08:07.075 "max_cntlid": 5, 00:08:07.075 "method": "nvmf_create_subsystem", 00:08:07.075 "req_id": 1 00:08:07.075 } 00:08:07.075 Got JSON-RPC error response 00:08:07.075 response: 00:08:07.075 { 00:08:07.075 "code": -32602, 00:08:07.075 "message": "Invalid cntlid range [6-5]" 00:08:07.075 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:07.075 23:34:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:08:07.333 23:34:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:08:07.333 { 00:08:07.333 "name": "foobar", 00:08:07.333 "method": "nvmf_delete_target", 00:08:07.333 "req_id": 1 00:08:07.333 } 00:08:07.333 Got JSON-RPC error response 00:08:07.333 response: 00:08:07.333 { 00:08:07.333 "code": -32602, 00:08:07.333 "message": "The specified target doesn'\''t exist, cannot delete it." 00:08:07.333 }' 00:08:07.333 23:34:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:08:07.333 { 00:08:07.333 "name": "foobar", 00:08:07.333 "method": "nvmf_delete_target", 00:08:07.333 "req_id": 1 00:08:07.333 } 00:08:07.333 Got JSON-RPC error response 00:08:07.333 response: 00:08:07.333 { 00:08:07.333 "code": -32602, 00:08:07.333 "message": "The specified target doesn't exist, cannot delete it." 
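Taken together, the cntlid probes pin down the accepted window: min_cntlid and max_cntlid must each fall in 1..65519 and satisfy min <= max, so [0-65519], [65520-65519], [1-0], [1-65520] and [6-5] are all rejected with -32602. One such probe, sketched against a running target:

  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12205 -i 6 -I 5 \
    2>&1 | grep -q 'Invalid cntlid range \[6-5\]' && echo "range check holds"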
00:08:07.333 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:08:07.333 23:34:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:08:07.333 23:34:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:08:07.333 23:34:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:07.333 23:34:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:08:07.333 23:34:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:07.333 23:34:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:08:07.333 23:34:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:07.333 23:34:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:07.333 rmmod nvme_tcp 00:08:07.334 rmmod nvme_fabrics 00:08:07.334 rmmod nvme_keyring 00:08:07.334 23:34:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:07.334 23:34:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:08:07.334 23:34:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:08:07.334 23:34:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 3701424 ']' 00:08:07.334 23:34:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 3701424 00:08:07.334 23:34:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 3701424 ']' 00:08:07.334 23:34:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 3701424 00:08:07.334 23:34:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:08:07.334 23:34:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:07.334 23:34:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3701424 00:08:07.334 23:34:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:07.334 23:34:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:07.334 23:34:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3701424' 00:08:07.334 killing process with pid 3701424 00:08:07.334 23:34:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 3701424 00:08:07.334 23:34:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 3701424 00:08:07.593 23:34:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:07.593 23:34:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:07.593 23:34:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:07.593 23:34:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:07.593 23:34:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:07.593 23:34:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.593 23:34:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:07.593 23:34:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.128 23:34:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:10.128 00:08:10.128 real 0m8.770s 00:08:10.128 user 0m20.381s 00:08:10.128 sys 0m2.478s 00:08:10.129 23:34:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:10.129 23:34:44 
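nvmftestfini then unwinds the fixture in roughly the reverse order it was built: clear the traps, unload the kernel initiator modules (the rmmod lines above for nvme_tcp, nvme_fabrics and nvme_keyring are that cascade), and kill the target by the pid nvmfappstart recorded. A condensed sketch, assuming the same $nvmfpid variable the harness uses:

  trap - SIGINT SIGTERM EXIT
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"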
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:10.129 ************************************ 00:08:10.129 END TEST nvmf_invalid 00:08:10.129 ************************************ 00:08:10.129 23:34:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:10.129 23:34:44 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:10.129 23:34:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:10.129 23:34:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.129 23:34:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:10.129 ************************************ 00:08:10.129 START TEST nvmf_abort 00:08:10.129 ************************************ 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:10.129 * Looking for test storage... 00:08:10.129 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:10.129 23:34:44 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:08:10.129 23:34:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:12.033 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:12.033 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:08:12.033 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:12.033 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:12.033 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:12.033 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:12.033 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:12.033 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:08:12.033 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:12.033 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:08:12.033 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:08:12.033 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:12.034 
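The arrays populated above map PCI vendor:device IDs onto NIC families; this host's two E810 ports (0x8086:0x159b, bound to the ice driver) match the e810 list and surface below as cvl_0_0 and cvl_0_1. A quick cross-check on a similar box, assuming lspci is installed:

  lspci -nn | grep -i '8086:159b'   # expect both 0000:09:00.0 and 0000:09:00.1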
23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:12.034 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:12.034 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:12.034 Found net devices under 0000:09:00.0: cvl_0_0 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:12.034 Found net devices under 0000:09:00.1: cvl_0_1 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:12.034 23:34:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:12.034 23:34:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:12.034 23:34:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:12.034 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:12.034 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:08:12.034 00:08:12.034 --- 10.0.0.2 ping statistics --- 00:08:12.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.034 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:08:12.034 23:34:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:12.034 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:12.034 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:08:12.034 00:08:12.034 --- 10.0.0.1 ping statistics --- 00:08:12.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.034 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:08:12.034 23:34:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:12.034 23:34:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:08:12.034 23:34:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:12.034 23:34:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:12.034 23:34:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:12.035 23:34:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:12.035 23:34:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:12.035 23:34:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:12.035 23:34:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:12.035 23:34:47 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:12.035 23:34:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:12.035 23:34:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:12.035 23:34:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:12.035 23:34:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=3704058 00:08:12.035 23:34:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:12.035 23:34:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3704058 00:08:12.035 23:34:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 3704058 ']' 00:08:12.035 23:34:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.035 23:34:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:12.035 23:34:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.035 23:34:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:12.035 23:34:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:12.035 [2024-07-15 23:34:47.095935] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
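For readability, the nvmf_tcp_init plumbing traced above condenses to the shell sketch below. The interface names (cvl_0_0, cvl_0_1), the namespace name, and the addresses are taken from the trace; the grouping and comments are illustrative, not a literal excerpt of nvmf/common.sh.

    # Target port moves into a private network namespace; initiator port stays in the root ns.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Let NVMe/TCP traffic (port 4420) in through the initiator-side interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Verify connectivity in both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

This back-to-back wiring is why every target-side command later in the log runs under 'ip netns exec cvl_0_0_ns_spdk'.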
00:08:12.035 [2024-07-15 23:34:47.096077] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:12.035 EAL: No free 2048 kB hugepages reported on node 1 00:08:12.293 [2024-07-15 23:34:47.161146] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:12.293 [2024-07-15 23:34:47.269805] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:12.293 [2024-07-15 23:34:47.269868] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:12.293 [2024-07-15 23:34:47.269897] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:12.293 [2024-07-15 23:34:47.269907] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:12.293 [2024-07-15 23:34:47.269917] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:12.293 [2024-07-15 23:34:47.270016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:12.293 [2024-07-15 23:34:47.270088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:12.293 [2024-07-15 23:34:47.270091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.293 23:34:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:12.293 23:34:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:08:12.293 23:34:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:12.293 23:34:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:12.293 23:34:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:12.293 23:34:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:12.293 23:34:47 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:12.293 23:34:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.293 23:34:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:12.293 [2024-07-15 23:34:47.410493] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:12.293 23:34:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.293 23:34:47 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:12.293 23:34:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.293 23:34:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:12.551 Malloc0 00:08:12.551 23:34:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.551 23:34:47 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:12.551 23:34:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.551 23:34:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:12.551 Delay0 00:08:12.551 23:34:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.551 23:34:47 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
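The target provisioning for this test is driven through rpc_cmd, the test framework's wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock. Collected from the trace (the namespace and listener calls appear just below), it amounts to roughly the following sketch:

    # Transport and backing devices, flags exactly as traced above.
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    rpc.py bdev_malloc_create 64 4096 -b Malloc0
    # Wrap Malloc0 in a delay bdev so I/O stays queued long enough to be aborted.
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # Then hammer the slow namespace with abort requests from the initiator side.
    ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128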
00:08:12.551 23:34:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.551 23:34:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:12.551 23:34:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.551 23:34:47 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:12.551 23:34:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.551 23:34:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:12.551 23:34:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.551 23:34:47 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:12.551 23:34:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.551 23:34:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:12.551 [2024-07-15 23:34:47.476658] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:12.551 23:34:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.551 23:34:47 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:12.551 23:34:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.551 23:34:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:12.551 23:34:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.551 23:34:47 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:12.551 EAL: No free 2048 kB hugepages reported on node 1 00:08:12.551 [2024-07-15 23:34:47.581929] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:15.070 Initializing NVMe Controllers 00:08:15.070 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:15.070 controller IO queue size 128 less than required 00:08:15.070 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:15.070 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:15.070 Initialization complete. Launching workers. 
00:08:15.070 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 31208 00:08:15.070 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 31269, failed to submit 62 00:08:15.070 success 31212, unsuccess 57, failed 0 00:08:15.070 23:34:49 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:15.070 23:34:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.070 23:34:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:15.070 23:34:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.070 23:34:49 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:15.070 23:34:49 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:15.070 23:34:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:15.070 23:34:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:08:15.070 23:34:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:15.070 23:34:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:08:15.070 23:34:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:15.070 23:34:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:15.070 rmmod nvme_tcp 00:08:15.070 rmmod nvme_fabrics 00:08:15.070 rmmod nvme_keyring 00:08:15.070 23:34:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:15.070 23:34:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:08:15.070 23:34:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:08:15.070 23:34:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3704058 ']' 00:08:15.070 23:34:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3704058 00:08:15.070 23:34:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 3704058 ']' 00:08:15.070 23:34:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 3704058 00:08:15.070 23:34:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:08:15.070 23:34:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:15.070 23:34:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3704058 00:08:15.070 23:34:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:15.070 23:34:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:15.070 23:34:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3704058' 00:08:15.070 killing process with pid 3704058 00:08:15.070 23:34:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 3704058 00:08:15.070 23:34:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 3704058 00:08:15.070 23:34:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:15.070 23:34:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:15.070 23:34:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:15.070 23:34:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:15.070 23:34:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:15.070 23:34:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.070 23:34:50 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:15.070 23:34:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.974 23:34:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:16.974 00:08:16.974 real 0m7.310s 00:08:16.975 user 0m10.364s 00:08:16.975 sys 0m2.530s 00:08:16.975 23:34:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:16.975 23:34:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:16.975 ************************************ 00:08:16.975 END TEST nvmf_abort 00:08:16.975 ************************************ 00:08:16.975 23:34:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:16.975 23:34:52 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:16.975 23:34:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:16.975 23:34:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.975 23:34:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:17.233 ************************************ 00:08:17.233 START TEST nvmf_ns_hotplug_stress 00:08:17.233 ************************************ 00:08:17.233 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:17.233 * Looking for test storage... 00:08:17.233 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:17.233 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:17.233 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:17.233 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:17.233 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:17.233 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:17.233 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:17.233 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:17.233 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:17.233 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:17.233 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:17.233 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:17.233 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:17.233 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:17.233 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:17.233 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:17.233 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:17.233 23:34:52 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:17.233 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:17.233 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:17.233 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.233 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.234 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.234 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.234 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.234 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.234 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:17.234 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.234 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:08:17.234 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:17.234 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:17.234 23:34:52 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:17.234 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:17.234 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:17.234 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:17.234 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:17.234 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:17.234 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:17.234 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:17.234 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:17.234 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:17.234 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:17.234 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:17.234 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:17.234 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.234 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:17.234 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.234 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:17.234 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:17.234 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:17.234 23:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:19.134 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:19.134 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:19.134 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:19.134 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:19.395 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:19.395 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.395 23:34:54 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:19.395 Found net devices under 0000:09:00.0: cvl_0_0 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:19.395 Found net devices under 0000:09:00.1: cvl_0_1 00:08:19.395 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:19.396 23:34:54 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:19.396 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:19.396 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:08:19.396 00:08:19.396 --- 10.0.0.2 ping statistics --- 00:08:19.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.396 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:19.396 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:19.396 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:08:19.396 00:08:19.396 --- 10.0.0.1 ping statistics --- 00:08:19.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.396 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3706299 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3706299 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 3706299 ']' 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:19.396 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:19.396 [2024-07-15 23:34:54.481283] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
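As with the abort test, nvmfappstart launches the target inside the namespace and blocks until its RPC socket answers. Schematically (binary path and flags from the log; the polling loop is a sketch of what waitforlisten does, not its exact code):

    # -m 0xE pins reactors to cores 1-3, matching the three 'Reactor started' lines below.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # Poll until the app is up and answering on the default RPC socket.
    until scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done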
00:08:19.396 [2024-07-15 23:34:54.481362] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.396 EAL: No free 2048 kB hugepages reported on node 1 00:08:19.655 [2024-07-15 23:34:54.546857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:19.655 [2024-07-15 23:34:54.647374] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:19.655 [2024-07-15 23:34:54.647430] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:19.655 [2024-07-15 23:34:54.647457] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:19.655 [2024-07-15 23:34:54.647468] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:19.655 [2024-07-15 23:34:54.647478] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:19.655 [2024-07-15 23:34:54.647563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:19.655 [2024-07-15 23:34:54.647627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:19.655 [2024-07-15 23:34:54.647630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.655 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:19.655 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:08:19.655 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:19.655 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:19.655 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:19.913 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:19.913 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:19.913 23:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:20.170 [2024-07-15 23:34:55.059581] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:20.170 23:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:20.427 23:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:20.684 [2024-07-15 23:34:55.570427] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:20.684 23:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:20.941 23:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:08:21.198 Malloc0 00:08:21.198 23:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:21.456 Delay0 00:08:21.456 23:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.713 23:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:21.970 NULL1 00:08:21.970 23:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:22.228 23:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3706715 00:08:22.228 23:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:22.228 23:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3706715 00:08:22.228 23:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.228 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.601 Read completed with error (sct=0, sc=11) 00:08:23.601 23:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:23.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.858 23:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:23.858 23:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:23.858 true 00:08:23.858 23:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3706715 00:08:23.858 23:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.788 23:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:25.046 23:35:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:25.046 23:35:00 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:25.313 true 00:08:25.313 23:35:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3706715 00:08:25.313 23:35:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.571 23:35:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:25.829 23:35:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:25.829 23:35:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:26.087 true 00:08:26.087 23:35:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3706715 00:08:26.087 23:35:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.344 23:35:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:26.602 23:35:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:26.602 23:35:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:26.859 true 00:08:26.859 23:35:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3706715 00:08:26.859 23:35:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.791 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.791 23:35:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:27.791 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.791 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.791 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:28.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:28.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:28.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:28.048 23:35:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:28.048 23:35:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:28.306 true 00:08:28.306 23:35:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3706715 00:08:28.306 23:35:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.236 23:35:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:29.236 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.236 23:35:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:29.236 23:35:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:29.493 true 00:08:29.493 23:35:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3706715 00:08:29.493 23:35:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.751 23:35:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:30.008 23:35:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:30.008 23:35:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:30.266 true 00:08:30.266 23:35:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3706715 00:08:30.266 23:35:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.198 23:35:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:31.455 23:35:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:31.455 23:35:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:31.711 true 00:08:31.711 23:35:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3706715 00:08:31.711 23:35:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.968 23:35:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:32.225 23:35:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:32.225 23:35:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:32.482 true 00:08:32.482 23:35:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3706715 00:08:32.482 23:35:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:32.739 23:35:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:32.995 23:35:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:08:32.995 23:35:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:08:33.252 true
00:08:33.252 23:35:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3706715
00:08:33.252 23:35:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:34.180 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:34.436 23:35:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:34.436 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:34.436 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:34.436 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:34.436 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:34.436 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:34.436 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:34.692 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:34.692 23:35:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:08:34.692 23:35:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:08:34.949 true
00:08:34.949 23:35:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3706715
00:08:34.949 23:35:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:35.515 23:35:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:35.515 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:35.773 23:35:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:08:35.773 23:35:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:08:36.031 true
00:08:36.031 23:35:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3706715
00:08:36.031 23:35:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:36.288 23:35:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:36.545 23:35:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:08:36.545 23:35:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:08:36.802 true
00:08:36.802 23:35:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3706715
00:08:36.802 23:35:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:37.734 23:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:37.990 23:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:08:37.990 23:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:08:38.248 true
00:08:38.248 23:35:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3706715
00:08:38.248 23:35:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:38.505 23:35:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:38.762 23:35:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:08:38.762 23:35:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:08:39.026 true
00:08:39.026 23:35:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3706715
00:08:39.026 23:35:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:39.991 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:39.991 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:39.991 23:35:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:08:39.991 23:35:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:08:40.248 true
00:08:40.248 23:35:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3706715
00:08:40.248 23:35:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:40.505 23:35:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:40.763 23:35:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:08:40.763 23:35:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:08:41.020 true
00:08:41.020 23:35:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3706715
00:08:41.020 23:35:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:41.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:41.951 23:35:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:42.209 23:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:08:42.209 23:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:08:42.467 true
00:08:42.467 23:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3706715
00:08:42.467 23:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:42.725 23:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:42.725 23:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:08:42.725 23:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:08:42.983 true
00:08:42.983 23:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3706715
00:08:42.983 23:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:43.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:44.173 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:44.173 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:44.173 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:08:44.173 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:08:44.431 true
00:08:44.431 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3706715
00:08:44.431 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:44.688 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:44.946 23:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:08:44.946 23:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:08:45.203 true
00:08:45.203 23:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3706715
00:08:45.203 23:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:46.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:46.393 23:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:46.393 23:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:08:46.393 23:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:08:46.650 true
00:08:46.650 23:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3706715
00:08:46.907 23:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:46.907 23:35:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:47.162 23:35:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:08:47.162 23:35:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:08:47.418 true
00:08:47.418 23:35:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3706715
00:08:47.418 23:35:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:48.348 23:35:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:48.348 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:48.348 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:48.604 23:35:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:08:48.604 23:35:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:08:48.861 true
00:08:48.861 23:35:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3706715
00:08:48.861 23:35:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
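The @44-@50 tags above all point into test/nvmf/target/ns_hotplug_stress.sh: while the I/O workload (PID 3706715 here) is alive, the script keeps hot-removing and re-adding namespace 1 and growing the NULL1 bdev by one unit per pass. A minimal sketch of that loop, where $rpc_py, $perf_pid, and the starting null_size are illustrative assumptions rather than values taken from this log:

    # Sketch of the resize loop behind the @44-@50 log tags (assumed names).
    null_size=1000
    while kill -0 "$perf_pid"; do                                         # line 44: workload still running?
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # line 45: hot-remove ns 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # line 46: hot-add it back
        null_size=$((null_size + 1))                                      # line 49: grow target size
        $rpc_py bdev_null_resize NULL1 "$null_size"                       # line 50: resize under I/O
    done

Once kill -0 fails (the "No such process" entry below), the workload has exited and the script moves on to its multi-worker phase.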
00:08:49.118 23:35:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:49.374 23:35:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:08:49.374 23:35:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:08:49.631 true
00:08:49.631 23:35:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3706715
00:08:49.631 23:35:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:50.561 23:35:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:50.819 23:35:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:08:50.819 23:35:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:08:51.076 true
00:08:51.076 23:35:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3706715
00:08:51.076 23:35:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:51.333 23:35:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:51.590 23:35:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:08:51.590 23:35:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:08:51.847 true
00:08:51.847 23:35:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3706715
00:08:51.847 23:35:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:52.776 Initializing NVMe Controllers
00:08:52.776 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:52.776 Controller IO queue size 128, less than required.
00:08:52.776 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:52.776 Controller IO queue size 128, less than required.
00:08:52.776 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:52.776 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:52.776 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:08:52.776 Initialization complete. Launching workers.
00:08:52.776 ========================================================
00:08:52.776                                                 Latency(us)
00:08:52.776 Device Information                            : IOPS       MiB/s    Average      min          max
00:08:52.776 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  1113.16    0.54     65170.81   2652.22    1044401.56
00:08:52.776 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11835.91    5.78     10816.34   3716.57     450188.31
00:08:52.776 ========================================================
00:08:52.776 Total                                         : 12949.07    6.32     15488.91   2652.22    1044401.56
00:08:52.776
00:08:52.776 23:35:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:53.033 23:35:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:08:53.033 23:35:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:08:53.033 true
00:08:53.289 23:35:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3706715
00:08:53.289 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3706715) - No such process
00:08:53.289 23:35:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3706715
00:08:53.289 23:35:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:53.547 23:35:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:53.804 23:35:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:08:53.804 23:35:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:08:53.804 23:35:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:08:53.804 23:35:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:53.804 23:35:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:08:53.804 null0
00:08:54.061 23:35:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:54.061 23:35:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:54.061 23:35:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:08:54.061 null1
00:08:54.061 23:35:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:54.061 23:35:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:54.061 23:35:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:08:54.318 null2
00:08:54.318 23:35:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:54.318 23:35:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:54.318 23:35:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:08:54.575 null3
00:08:54.575 23:35:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:54.575 23:35:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:54.575 23:35:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:08:54.831 null4
00:08:54.831 23:35:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:54.831 23:35:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:54.831 23:35:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:08:55.115 null5
00:08:55.115 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:55.115 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:55.116 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:08:55.377 null6
00:08:55.377 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:55.377 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:55.377 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:08:55.635 null7
00:08:55.635 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:55.635 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:55.635 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
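The @58-@64 tags set up the second phase: eight null bdevs are created and one add_remove worker is forked per bdev, with the PIDs collected for a later wait. A sketch of that setup, with $rpc_py again an assumed shorthand for the scripts/rpc.py invocations above (the 100 and 4096 arguments to bdev_null_create appear to be the bdev size in MiB and the block size in bytes):

    nthreads=8; pids=()                               # line 58
    for ((i = 0; i < nthreads; ++i)); do              # line 59
        $rpc_py bdev_null_create "null$i" 100 4096    # line 60: size and block size as logged
    done
    for ((i = 0; i < nthreads; ++i)); do              # line 62
        add_remove $((i + 1)) "null$i" &              # line 63: one hotplug worker per bdev
        pids+=($!)                                    # line 64: remember worker PID
    done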
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
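Each backgrounded worker runs the add_remove function whose @14-@18 tags dominate the rest of this log: ten iterations of hot-adding a namespace backed by its null bdev and immediately hot-removing it. A minimal reconstruction from those tags, with $rpc_py again an assumed shorthand:

    add_remove() {
        local nsid=$1 bdev=$2                   # line 14: namespace ID and backing bdev
        for ((i = 0; i < 10; ++i)); do          # line 16: ten add/remove cycles
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # line 17
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # line 18
        done
    }

The parent then reaps all eight workers at once, which is the "wait 3710768 ... 3710781" entry tagged @66 just below; the interleaved @16/@17/@18 entries after it are the eight workers racing through their loops concurrently.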
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3710768 3710769 3710771 3710773 3710775 3710777 3710779 3710781
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:55.636 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:55.895 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:55.895 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:55.895 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:55.895 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:55.895 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:55.895 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:55.895 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:55.895 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:56.154 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:56.154 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:56.154 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:56.154 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:56.154 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:56.154 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:56.154 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:56.154 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:56.154 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:56.154 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:56.154 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:56.154 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:56.154 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:56.154 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:56.154 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:56.154 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:56.154 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:56.154 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:56.154 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:56.154 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:56.154 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:56.154 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:56.154 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:56.154 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:56.412 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:56.412 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:56.412 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:56.412 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:56.412 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:56.412 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:56.412 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:56.412 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:56.670 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:56.670 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:56.670 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:56.670 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:56.670 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:56.670 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:56.670 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:56.670 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:56.670 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:56.670 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:56.670 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:56.670 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:56.670 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:56.670 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:56.670 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:56.670 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:56.670 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:56.670 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:56.670 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:56.670 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:56.670 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:56.670 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:56.670 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:56.670 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:56.928 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:56.928 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:56.928 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:56.928 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:56.928 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:56.928 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:57.188 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:57.188 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:57.188 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:57.188 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:57.188 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:57.188 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:57.188 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:57.188 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:57.447 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:57.447 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:57.447 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:57.447 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:57.447 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:57.447 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:57.447 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:57.447 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:57.447 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:57.447 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:57.447 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:57.447 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:57.447 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:57.447 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:57.448 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:57.448 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:57.448 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:57.448 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:57.706 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:57.706 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:57.706 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:57.706 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:57.706 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:57.706 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:57.706 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:57.706 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:57.965 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:57.966 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:57.966 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:57.966 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:57.966 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:57.966 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:57.966 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:57.966 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:57.966 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:57.966 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:57.966 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:57.966 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:57.966 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:57.966 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:57.966 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:57.966 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:57.966 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:57.966 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:57.966 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:57.966 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:57.966 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:57.966 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:57.966 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:57.966 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:58.224 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:58.224 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:58.224 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:58.224 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:58.224 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:58.224 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:58.224 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:58.224 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:58.481 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:58.481 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:58.481 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:58.481 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:58.481 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:58.481 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:58.481 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:58.481 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:58.481 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:58.481 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:58.481 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:58.481 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:58.481 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:58.481 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:58.481 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:58.481 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:58.481 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:58.481 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:58.481 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:58.481 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:58.481 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:58.481 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:58.481 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:58.481 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:58.738 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:58.738 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:58.738 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:58.738 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:58.738 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:58.738 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:58.738 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:58.738 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:58.995 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:58.995 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:58.995 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:58.996 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:58.996 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:58.996 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:58.996 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:58.996 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:58.996 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:58.996 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:58.996 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:58.996 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:58.996 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:58.996 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:58.996 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:58.996 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:58.996 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:58.996 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:58.996 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:58.996 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:58.996 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:58.996 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:58.996 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:58.996 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:59.254 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:59.254 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:59.254 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:59.254 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:59.254 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:59.254 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:59.254 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:59.254 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:59.512 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:59.512 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:59.512 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:59.512 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:59.512 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:59.512 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:59.512 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:59.512 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:59.512 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:59.512 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:59.512 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:59.512 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:59.512 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:59.512 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:59.512 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:59.512 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:59.512 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:59.512 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:59.512 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:59.513 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:59.513 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:59.513 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:59.513 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:59.513 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:59.771 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:59.771 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:59.771 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:59.771 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:59.771 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:59.771 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:59.771 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:59.771 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:00.029 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:00.029 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:00.029 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:00.029 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 --
# (( ++i )) 00:09:00.029 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.029 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:00.029 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.029 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.029 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:00.029 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.029 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.029 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:00.029 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.029 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.029 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.029 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:00.029 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.029 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:00.029 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.029 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.029 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:00.029 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.029 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.029 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:00.287 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:00.287 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.287 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:00.287 
23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:00.287 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:00.287 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:00.287 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:00.287 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:00.545 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.545 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.545 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:00.545 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.545 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.545 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:00.545 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.545 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.545 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:00.545 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.545 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.545 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:00.545 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.545 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.545 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:00.545 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.545 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.545 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:00.545 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.545 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.545 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:00.545 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.545 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.545 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:00.803 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.803 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:00.803 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:00.803 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:00.803 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:00.803 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:00.803 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:00.803 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:01.061 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.061 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.061 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.061 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.061 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.061 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.061 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.061 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.061 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
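The interleaved @16/@17/@18 entries above are the hotplug stress loop itself: a counter guarded by (( ++i )) / (( i < 10 )), an nvmf_subsystem_add_ns, and a matching nvmf_subsystem_remove_ns, repeated across namespaces 1-8. Below is a minimal sketch of one plausible reconstruction, assuming eight backgrounded workers and a C-style for loop; names such as rpc_py and add_remove are shorthand inferred from the trace, not confirmed against spdk/test/nvmf/target/ns_hotplug_stress.sh.

    # Reconstructed from the xtrace; the per-namespace parallelism is an
    # assumption that would explain the shuffled namespace order and the
    # near-identical timestamps within each add/remove round.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2016-06.io.spdk:cnode1

    add_remove() {
        local nsid=$1 bdev=$2 i
        for ((i = 0; i < 10; ++i)); do                               # traces as @16
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" $subnqn "$bdev" # traces as @17
            $rpc_py nvmf_subsystem_remove_ns $subnqn "$nsid"         # traces as @18
        done
    }

    for n in $(seq 1 8); do
        add_remove "$n" "null$((n - 1))" &   # namespace N is backed by bdev null(N-1)
    done
    wait

Under this reading, the trailing runs of (( ++i )) / (( i < 10 )) just below, with no add_ns following them, are each worker's final loop test evaluating false before the @68 trap is cleared and nvmftestfini runs.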
00:09:01.061 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.061 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.061 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.061 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.061 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.061 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.062 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.062 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:01.062 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:09:01.062 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:01.062 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:09:01.062 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:01.062 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:09:01.062 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:01.062 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:01.062 rmmod nvme_tcp 00:09:01.320 rmmod nvme_fabrics 00:09:01.320 rmmod nvme_keyring 00:09:01.320 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:01.320 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:09:01.320 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:09:01.320 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3706299 ']' 00:09:01.320 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3706299 00:09:01.320 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 3706299 ']' 00:09:01.320 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 3706299 00:09:01.320 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:09:01.320 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:01.320 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3706299 00:09:01.320 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:01.320 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:01.320 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3706299' 00:09:01.320 killing process with pid 3706299 00:09:01.320 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 3706299 00:09:01.320 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 3706299 00:09:01.580 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:01.580 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:01.580 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:01.580 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:01.580 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:01.580 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.580 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:01.580 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.485 23:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:03.485 00:09:03.485 real 0m46.464s 00:09:03.485 user 3m30.829s 00:09:03.485 sys 0m16.437s 00:09:03.485 23:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:03.485 23:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:03.485 ************************************ 00:09:03.485 END TEST nvmf_ns_hotplug_stress 00:09:03.485 ************************************ 00:09:03.485 23:35:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:03.485 23:35:38 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:09:03.485 23:35:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:03.485 23:35:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:03.485 23:35:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:03.743 ************************************ 00:09:03.743 START TEST nvmf_connect_stress 00:09:03.743 ************************************ 00:09:03.743 23:35:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:09:03.743 * Looking for test storage... 
00:09:03.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:03.743 23:35:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:03.743 23:35:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:09:03.743 23:35:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:03.743 23:35:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:03.743 23:35:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:03.743 23:35:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:03.743 23:35:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:03.743 23:35:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:03.743 23:35:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:03.743 23:35:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:03.743 23:35:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:03.743 23:35:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:03.743 23:35:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:03.743 23:35:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:03.743 23:35:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:03.743 23:35:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:03.743 23:35:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:03.743 23:35:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:03.743 23:35:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:03.743 23:35:38 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.743 23:35:38 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:03.743 23:35:38 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.743 23:35:38 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.743 23:35:38 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.743 23:35:38 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.743 23:35:38 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:09:03.743 23:35:38 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.743 23:35:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:09:03.743 23:35:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:03.743 23:35:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:03.743 23:35:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:03.743 23:35:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:03.743 23:35:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:03.744 23:35:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:03.744 23:35:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:03.744 23:35:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:03.744 23:35:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:09:03.744 23:35:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:03.744 23:35:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:03.744 23:35:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:03.744 23:35:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:03.744 23:35:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:03.744 23:35:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.744 23:35:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:09:03.744 23:35:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.744 23:35:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:03.744 23:35:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:03.744 23:35:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:03.744 23:35:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:05.647 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:05.647 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:05.647 Found net devices under 0000:09:00.0: cvl_0_0 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:05.647 23:35:40 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:05.647 Found net devices under 0000:09:00.1: cvl_0_1 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:05.647 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:05.907 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:05.907 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:05.907 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:05.907 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:05.907 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:05.907 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:05.907 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:05.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:05.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:09:05.907 00:09:05.907 --- 10.0.0.2 ping statistics --- 00:09:05.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.907 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:09:05.907 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:05.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:05.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:09:05.907 00:09:05.907 --- 10.0.0.1 ping statistics --- 00:09:05.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.907 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:09:05.907 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:05.907 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:09:05.907 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:05.907 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:05.907 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:05.907 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:05.907 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:05.907 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:05.907 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:05.907 23:35:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:09:05.907 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:05.907 23:35:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:05.907 23:35:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:05.907 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3713528 00:09:05.907 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3713528 00:09:05.907 23:35:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 3713528 ']' 00:09:05.907 23:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:05.907 23:35:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.907 23:35:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:05.907 23:35:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.907 23:35:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:05.907 23:35:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:05.907 [2024-07-15 23:35:40.953804] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
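The nvmf_tcp_init sequence traced just above (nvmf/common.sh@229-@268) wires the two E810 ports into a point-to-point test topology: the target-side port moves into a private network namespace, the initiator side stays in the root namespace, and a one-packet ping in each direction proves the path before the target app comes up. Condensed to the bare commands, copied from the trace with the xtrace prefixes stripped:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port -> namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

Both pings report 0% loss, so nvmf_tgt is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xE, visible above as nvmfappstart) and will listen on 10.0.0.2:4420.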
00:09:05.907 [2024-07-15 23:35:40.953903] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:05.907 EAL: No free 2048 kB hugepages reported on node 1 00:09:05.907 [2024-07-15 23:35:41.016795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:06.166 [2024-07-15 23:35:41.125760] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:06.166 [2024-07-15 23:35:41.125821] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:06.166 [2024-07-15 23:35:41.125850] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:06.166 [2024-07-15 23:35:41.125861] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:06.166 [2024-07-15 23:35:41.125871] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:06.166 [2024-07-15 23:35:41.125953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:06.166 [2024-07-15 23:35:41.126022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:06.166 [2024-07-15 23:35:41.126026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:06.166 23:35:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:06.166 23:35:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:09:06.166 23:35:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:06.166 23:35:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:06.166 23:35:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:06.166 23:35:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:06.166 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:06.166 23:35:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.166 23:35:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:06.166 [2024-07-15 23:35:41.261779] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:06.166 23:35:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.166 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:06.166 23:35:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.166 23:35:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:06.166 23:35:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.166 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:06.166 23:35:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.166 23:35:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:06.166 [2024-07-15 23:35:41.287193] tcp.c: 
981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:06.424 NULL1 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3713560 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:06.424 EAL: No free 2048 kB hugepages reported on node 1 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:06.424 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:06.425 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:06.425 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3713560 00:09:06.425 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:06.425 23:35:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.425 23:35:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:06.682 23:35:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.682 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3713560 00:09:06.682 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:06.682 23:35:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.682 23:35:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:06.940 23:35:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.940 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3713560 00:09:06.940 23:35:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:06.940 23:35:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.940 23:35:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:07.197 23:35:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.197 23:35:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3713560 
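Traced at connect_stress.sh@15-@35 above, the test stands up a throwaway subsystem, launches the connect_stress binary against it, and then keeps replaying a batch of RPCs for as long as the stressor stays alive. Here is a sketch of that flow assembled from the traced commands; the while-loop shape and the rpc.txt redirection are inferences from the repeating @34 kill -0 / @35 rpc_cmd pairs, and the bodies of the twenty @28 cat heredocs that fill rpc.txt are suppressed in the log, so they are not reproduced.

    # Target setup, copied from the @15-@18 entries; rpc_cmd is the
    # autotest harness helper that forwards to scripts/rpc.py.
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512

    # Stressor in the background for 10 seconds (@20-@21); full path in the
    # trace is spdk/test/nvme/connect_stress/connect_stress.
    connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &
    PERF_PID=$!

    # rpc.txt was rebuilt above (@23/@25, then twenty heredocs at @27/@28);
    # replay it on every pass while the stressor is still running.
    rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
    while kill -0 "$PERF_PID"; do    # @34: stressor still alive?
        rpc_cmd < "$rpcs"            # @35: inferred redirection, not visible in the trace
    done

For reference, nvmf_create_subsystem's -m 10 caps the subsystem at ten namespaces, and bdev_null_create NULL1 1000 512 backs it with a 1000 MB null bdev using 512-byte blocks. The repeating kill -0 3713560 / rpc_cmd pairs that continue below are this monitoring loop spinning until the stressor's 10-second run ends.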
00:09:07.198 23:35:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:07.198 23:35:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.198 23:35:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:07.763 23:35:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.763 23:35:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3713560 00:09:07.763 23:35:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:07.763 23:35:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.763 23:35:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:08.021 23:35:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.021 23:35:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3713560 00:09:08.021 23:35:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:08.021 23:35:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.021 23:35:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:08.279 23:35:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.279 23:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3713560 00:09:08.279 23:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:08.279 23:35:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.279 23:35:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:08.537 23:35:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.537 23:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3713560 00:09:08.537 23:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:08.537 23:35:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.537 23:35:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:08.794 23:35:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.794 23:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3713560 00:09:08.794 23:35:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:08.794 23:35:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.794 23:35:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:09.359 23:35:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.359 23:35:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3713560 00:09:09.359 23:35:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:09.359 23:35:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.359 23:35:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:09.616 23:35:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.616 23:35:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3713560 00:09:09.616 23:35:44 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:09:09.616 23:35:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.616 23:35:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:09.873 23:35:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.873 23:35:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3713560 00:09:09.873 23:35:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:09.873 23:35:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.873 23:35:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:10.130 23:35:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.130 23:35:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3713560 00:09:10.130 23:35:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:10.130 23:35:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.130 23:35:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:10.695 23:35:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.695 23:35:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3713560 00:09:10.695 23:35:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:10.695 23:35:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.695 23:35:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:10.953 23:35:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.953 23:35:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3713560 00:09:10.953 23:35:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:10.953 23:35:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.953 23:35:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:11.242 23:35:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.242 23:35:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3713560 00:09:11.242 23:35:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:11.242 23:35:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.242 23:35:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:11.500 23:35:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.500 23:35:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3713560 00:09:11.500 23:35:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:11.500 23:35:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.500 23:35:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:11.758 23:35:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.758 23:35:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3713560 00:09:11.758 23:35:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:11.758 
23:35:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.758 23:35:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:12.016 23:35:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.016 23:35:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3713560 00:09:12.016 23:35:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:12.016 23:35:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.016 23:35:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:12.582 23:35:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.582 23:35:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3713560 00:09:12.582 23:35:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:12.582 23:35:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.582 23:35:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:12.839 23:35:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.839 23:35:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3713560 00:09:12.839 23:35:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:12.839 23:35:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.839 23:35:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:13.096 23:35:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.096 23:35:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3713560 00:09:13.096 23:35:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:13.096 23:35:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.096 23:35:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:13.353 23:35:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.353 23:35:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3713560 00:09:13.353 23:35:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:13.353 23:35:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.353 23:35:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:13.918 23:35:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.918 23:35:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3713560 00:09:13.918 23:35:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:13.918 23:35:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.918 23:35:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:14.176 23:35:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.176 23:35:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3713560 00:09:14.176 23:35:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:14.176 23:35:49 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.176 23:35:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:14.433 23:35:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.433 23:35:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3713560 00:09:14.433 23:35:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:14.433 23:35:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.433 23:35:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:14.690 23:35:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.690 23:35:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3713560 00:09:14.690 23:35:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:14.690 23:35:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.690 23:35:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:14.947 23:35:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.947 23:35:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3713560 00:09:14.947 23:35:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:14.947 23:35:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.947 23:35:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:15.511 23:35:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.511 23:35:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3713560 00:09:15.511 23:35:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:15.511 23:35:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.511 23:35:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:15.767 23:35:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.767 23:35:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3713560 00:09:15.767 23:35:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:15.767 23:35:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.767 23:35:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:16.024 23:35:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.024 23:35:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3713560 00:09:16.024 23:35:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:16.024 23:35:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.024 23:35:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:16.281 23:35:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.281 23:35:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3713560 00:09:16.281 23:35:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:16.281 23:35:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 
00:09:16.281 23:35:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:16.281 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:16.537 23:35:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.537 23:35:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3713560 00:09:16.537 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3713560) - No such process 00:09:16.537 23:35:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3713560 00:09:16.537 23:35:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:16.537 23:35:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:16.537 23:35:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:09:16.537 23:35:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:16.537 23:35:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:09:16.537 23:35:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:16.537 23:35:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:09:16.537 23:35:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:16.537 23:35:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:16.537 rmmod nvme_tcp 00:09:16.537 rmmod nvme_fabrics 00:09:16.537 rmmod nvme_keyring 00:09:16.795 23:35:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:16.795 23:35:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:09:16.795 23:35:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:09:16.795 23:35:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3713528 ']' 00:09:16.795 23:35:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3713528 00:09:16.795 23:35:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 3713528 ']' 00:09:16.795 23:35:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 3713528 00:09:16.795 23:35:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:09:16.795 23:35:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:16.795 23:35:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3713528 00:09:16.795 23:35:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:16.795 23:35:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:16.795 23:35:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3713528' 00:09:16.795 killing process with pid 3713528 00:09:16.795 23:35:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 3713528 00:09:16.795 23:35:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 3713528 00:09:17.053 23:35:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:17.053 23:35:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:17.053 23:35:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:09:17.053 23:35:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:17.053 23:35:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:17.053 23:35:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:17.053 23:35:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:17.053 23:35:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.955 23:35:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:18.955 00:09:18.955 real 0m15.367s 00:09:18.955 user 0m38.316s 00:09:18.955 sys 0m5.896s 00:09:18.955 23:35:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:18.955 23:35:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:18.955 ************************************ 00:09:18.955 END TEST nvmf_connect_stress 00:09:18.955 ************************************ 00:09:18.955 23:35:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:18.955 23:35:54 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:18.955 23:35:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:18.955 23:35:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:18.955 23:35:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:18.955 ************************************ 00:09:18.955 START TEST nvmf_fused_ordering 00:09:18.955 ************************************ 00:09:18.955 23:35:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:19.214 * Looking for test storage... 
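The cycle that fills the stretch above is connect_stress.sh's monitor loop: as long as the stress client (pid 3713560) answers kill -0, the harness keeps issuing RPCs at the target, and the interleaved [[ 0 == 0 ]] lines from autotest_common.sh are apparently rpc_cmd's expanded status check confirming each call returned 0. Once kill -0 fails with "No such process", the child is reaped and the target is torn down (wait at script line 38, rm -f rpc.txt at 39, trap reset at 41, nvmftestfini at 43). A minimal sketch of that pattern, reconstructed from the script line numbers the xtrace reports; the loop's actual RPC payload is not visible in the log, so stress_pid and the nvmf_get_subsystems call below are illustrative stand-ins, not the script's verbatim code:

# Liveness-polling pattern matching connect_stress.sh lines 34-43 as traced above.
# stress_pid and the RPC issued per iteration are illustrative assumptions.
while kill -0 "$stress_pid" 2>/dev/null; do    # line 34: stress client still running?
    rpc_cmd nvmf_get_subsystems >/dev/null     # line 35: keep the target's RPC server busy meanwhile
done
wait "$stress_pid"                             # line 38: reap the exited child ("No such process" above)
rm -f "$testdir/rpc.txt"                       # line 39: remove the RPC pipe file
trap - SIGINT SIGTERM EXIT                     # line 41: clear the error traps
nvmftestfini                                   # line 43: standard teardown (rmmod nvme-tcp, kill nvmf_tgt)

The point of polling with kill -0 rather than wait is that the monitor can keep generating RPC load against the target for the stress client's whole lifetime, then detect its exit within one loop iteration.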
00:09:19.214 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.214 23:35:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:19.215 23:35:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:19.215 23:35:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:09:19.215 23:35:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:21.110 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:21.110 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:09:21.110 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:21.110 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:21.110 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:21.110 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:21.110 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:21.110 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:09:21.110 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:21.110 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:09:21.110 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:09:21.110 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:09:21.110 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:09:21.110 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:09:21.110 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:09:21.110 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:21.110 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:21.110 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:21.110 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:21.110 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:21.110 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:21.110 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:21.110 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:21.110 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:21.110 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:21.111 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:21.111 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:21.111 Found net devices under 0000:09:00.0: cvl_0_0 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:21.111 23:35:56 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:21.111 Found net devices under 0000:09:00.1: cvl_0_1 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:21.111 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:21.369 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:21.369 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:21.369 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:21.369 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:21.369 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:21.369 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:21.369 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:21.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:21.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:09:21.369 00:09:21.369 --- 10.0.0.2 ping statistics --- 00:09:21.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.369 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:09:21.369 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:21.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:21.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:09:21.369 00:09:21.369 --- 10.0.0.1 ping statistics --- 00:09:21.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.369 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:09:21.369 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:21.369 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:09:21.369 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:21.369 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:21.369 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:21.369 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:21.369 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:21.369 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:21.369 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:21.369 23:35:56 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:09:21.369 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:21.369 23:35:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:21.369 23:35:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:21.369 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=3716709 00:09:21.369 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:21.369 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 3716709 00:09:21.369 23:35:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 3716709 ']' 00:09:21.369 23:35:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.369 23:35:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:21.369 23:35:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.369 23:35:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:21.369 23:35:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:21.369 [2024-07-15 23:35:56.403042] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
00:09:21.369 [2024-07-15 23:35:56.403138] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:21.369 EAL: No free 2048 kB hugepages reported on node 1 00:09:21.369 [2024-07-15 23:35:56.485157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.627 [2024-07-15 23:35:56.618944] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:21.627 [2024-07-15 23:35:56.619032] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:21.627 [2024-07-15 23:35:56.619070] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:21.627 [2024-07-15 23:35:56.619094] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:21.627 [2024-07-15 23:35:56.619115] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:21.627 [2024-07-15 23:35:56.619153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.627 23:35:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:21.627 23:35:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:09:21.627 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:21.627 23:35:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:21.627 23:35:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:21.885 23:35:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:21.885 23:35:56 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:21.885 23:35:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.885 23:35:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:21.885 [2024-07-15 23:35:56.768877] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:21.885 23:35:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.885 23:35:56 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:21.885 23:35:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.885 23:35:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:21.885 23:35:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.885 23:35:56 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:21.885 23:35:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.885 23:35:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:21.885 [2024-07-15 23:35:56.785090] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:21.885 23:35:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.885 23:35:56 
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:21.885 23:35:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.885 23:35:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:21.885 NULL1 00:09:21.885 23:35:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.885 23:35:56 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:09:21.885 23:35:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.885 23:35:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:21.885 23:35:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.885 23:35:56 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:21.885 23:35:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.885 23:35:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:21.885 23:35:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.885 23:35:56 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:21.885 [2024-07-15 23:35:56.832536] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:09:21.885 [2024-07-15 23:35:56.832577] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3716851 ] 00:09:21.885 EAL: No free 2048 kB hugepages reported on node 1 00:09:22.449 Attached to nqn.2016-06.io.spdk:cnode1 00:09:22.449 Namespace ID: 1 size: 1GB 00:09:22.449 fused_ordering(0) 00:09:22.449 fused_ordering(1) 00:09:22.449 fused_ordering(2) 00:09:22.449 fused_ordering(3) 00:09:22.449 fused_ordering(4) 00:09:22.449 fused_ordering(5) 00:09:22.449 fused_ordering(6) 00:09:22.449 fused_ordering(7) 00:09:22.449 fused_ordering(8) 00:09:22.449 fused_ordering(9) 00:09:22.449 fused_ordering(10) 00:09:22.449 fused_ordering(11) 00:09:22.449 fused_ordering(12) 00:09:22.449 fused_ordering(13) 00:09:22.449 fused_ordering(14) 00:09:22.449 fused_ordering(15) 00:09:22.449 fused_ordering(16) 00:09:22.449 fused_ordering(17) 00:09:22.449 fused_ordering(18) 00:09:22.449 fused_ordering(19) 00:09:22.449 fused_ordering(20) 00:09:22.449 fused_ordering(21) 00:09:22.449 fused_ordering(22) 00:09:22.449 fused_ordering(23) 00:09:22.449 fused_ordering(24) 00:09:22.449 fused_ordering(25) 00:09:22.449 fused_ordering(26) 00:09:22.449 fused_ordering(27) 00:09:22.449 fused_ordering(28) 00:09:22.449 fused_ordering(29) 00:09:22.449 fused_ordering(30) 00:09:22.449 fused_ordering(31) 00:09:22.449 fused_ordering(32) 00:09:22.449 fused_ordering(33) 00:09:22.449 fused_ordering(34) 00:09:22.449 fused_ordering(35) 00:09:22.449 fused_ordering(36) 00:09:22.449 fused_ordering(37) 00:09:22.449 fused_ordering(38) 00:09:22.449 fused_ordering(39) 00:09:22.449 fused_ordering(40) 00:09:22.449 fused_ordering(41) 00:09:22.449 fused_ordering(42) 00:09:22.449 fused_ordering(43) 00:09:22.449 
fused_ordering(44) 00:09:22.449 fused_ordering(45) 00:09:22.449 fused_ordering(46) 00:09:22.449 fused_ordering(47) 00:09:22.449 fused_ordering(48) 00:09:22.449 fused_ordering(49) 00:09:22.449 fused_ordering(50) 00:09:22.449 fused_ordering(51) 00:09:22.449 fused_ordering(52) 00:09:22.449 fused_ordering(53) 00:09:22.449 fused_ordering(54) 00:09:22.449 fused_ordering(55) 00:09:22.449 fused_ordering(56) 00:09:22.449 fused_ordering(57) 00:09:22.449 fused_ordering(58) 00:09:22.449 fused_ordering(59) 00:09:22.449 fused_ordering(60) 00:09:22.449 fused_ordering(61) 00:09:22.449 fused_ordering(62) 00:09:22.449 fused_ordering(63) 00:09:22.449 fused_ordering(64) 00:09:22.449 fused_ordering(65) 00:09:22.449 fused_ordering(66) 00:09:22.449 fused_ordering(67) 00:09:22.449 fused_ordering(68) 00:09:22.449 fused_ordering(69) 00:09:22.449 fused_ordering(70) 00:09:22.449 fused_ordering(71) 00:09:22.449 fused_ordering(72) 00:09:22.449 fused_ordering(73) 00:09:22.449 fused_ordering(74) 00:09:22.449 fused_ordering(75) 00:09:22.449 fused_ordering(76) 00:09:22.449 fused_ordering(77) 00:09:22.449 fused_ordering(78) 00:09:22.449 fused_ordering(79) 00:09:22.449 fused_ordering(80) 00:09:22.449 fused_ordering(81) 00:09:22.449 fused_ordering(82) 00:09:22.449 fused_ordering(83) 00:09:22.449 fused_ordering(84) 00:09:22.449 fused_ordering(85) 00:09:22.449 fused_ordering(86) 00:09:22.449 fused_ordering(87) 00:09:22.449 fused_ordering(88) 00:09:22.449 fused_ordering(89) 00:09:22.449 fused_ordering(90) 00:09:22.449 fused_ordering(91) 00:09:22.449 fused_ordering(92) 00:09:22.449 fused_ordering(93) 00:09:22.449 fused_ordering(94) 00:09:22.449 fused_ordering(95) 00:09:22.449 fused_ordering(96) 00:09:22.449 fused_ordering(97) 00:09:22.449 fused_ordering(98) 00:09:22.449 fused_ordering(99) 00:09:22.449 fused_ordering(100) 00:09:22.449 fused_ordering(101) 00:09:22.449 fused_ordering(102) 00:09:22.449 fused_ordering(103) 00:09:22.449 fused_ordering(104) 00:09:22.449 fused_ordering(105) 00:09:22.449 fused_ordering(106) 00:09:22.449 fused_ordering(107) 00:09:22.449 fused_ordering(108) 00:09:22.449 fused_ordering(109) 00:09:22.449 fused_ordering(110) 00:09:22.449 fused_ordering(111) 00:09:22.449 fused_ordering(112) 00:09:22.449 fused_ordering(113) 00:09:22.449 fused_ordering(114) 00:09:22.449 fused_ordering(115) 00:09:22.449 fused_ordering(116) 00:09:22.449 fused_ordering(117) 00:09:22.450 fused_ordering(118) 00:09:22.450 fused_ordering(119) 00:09:22.450 fused_ordering(120) 00:09:22.450 fused_ordering(121) 00:09:22.450 fused_ordering(122) 00:09:22.450 fused_ordering(123) 00:09:22.450 fused_ordering(124) 00:09:22.450 fused_ordering(125) 00:09:22.450 fused_ordering(126) 00:09:22.450 fused_ordering(127) 00:09:22.450 fused_ordering(128) 00:09:22.450 fused_ordering(129) 00:09:22.450 fused_ordering(130) 00:09:22.450 fused_ordering(131) 00:09:22.450 fused_ordering(132) 00:09:22.450 fused_ordering(133) 00:09:22.450 fused_ordering(134) 00:09:22.450 fused_ordering(135) 00:09:22.450 fused_ordering(136) 00:09:22.450 fused_ordering(137) 00:09:22.450 fused_ordering(138) 00:09:22.450 fused_ordering(139) 00:09:22.450 fused_ordering(140) 00:09:22.450 fused_ordering(141) 00:09:22.450 fused_ordering(142) 00:09:22.450 fused_ordering(143) 00:09:22.450 fused_ordering(144) 00:09:22.450 fused_ordering(145) 00:09:22.450 fused_ordering(146) 00:09:22.450 fused_ordering(147) 00:09:22.450 fused_ordering(148) 00:09:22.450 fused_ordering(149) 00:09:22.450 fused_ordering(150) 00:09:22.450 fused_ordering(151) 00:09:22.450 fused_ordering(152) 00:09:22.450 
fused_ordering(153) 00:09:22.450 fused_ordering(154) 00:09:22.450 fused_ordering(155) 00:09:22.450 fused_ordering(156) 00:09:22.450 fused_ordering(157) 00:09:22.450 fused_ordering(158) 00:09:22.450 fused_ordering(159) 00:09:22.450 fused_ordering(160) 00:09:22.450 fused_ordering(161) 00:09:22.450 fused_ordering(162) 00:09:22.450 fused_ordering(163) 00:09:22.450 fused_ordering(164) 00:09:22.450 fused_ordering(165) 00:09:22.450 fused_ordering(166) 00:09:22.450 fused_ordering(167) 00:09:22.450 fused_ordering(168) 00:09:22.450 fused_ordering(169) 00:09:22.450 fused_ordering(170) 00:09:22.450 fused_ordering(171) 00:09:22.450 fused_ordering(172) 00:09:22.450 fused_ordering(173) 00:09:22.450 fused_ordering(174) 00:09:22.450 fused_ordering(175) 00:09:22.450 fused_ordering(176) 00:09:22.450 fused_ordering(177) 00:09:22.450 fused_ordering(178) 00:09:22.450 fused_ordering(179) 00:09:22.450 fused_ordering(180) 00:09:22.450 fused_ordering(181) 00:09:22.450 fused_ordering(182) 00:09:22.450 fused_ordering(183) 00:09:22.450 fused_ordering(184) 00:09:22.450 fused_ordering(185) 00:09:22.450 fused_ordering(186) 00:09:22.450 fused_ordering(187) 00:09:22.450 fused_ordering(188) 00:09:22.450 fused_ordering(189) 00:09:22.450 fused_ordering(190) 00:09:22.450 fused_ordering(191) 00:09:22.450 fused_ordering(192) 00:09:22.450 fused_ordering(193) 00:09:22.450 fused_ordering(194) 00:09:22.450 fused_ordering(195) 00:09:22.450 fused_ordering(196) 00:09:22.450 fused_ordering(197) 00:09:22.450 fused_ordering(198) 00:09:22.450 fused_ordering(199) 00:09:22.450 fused_ordering(200) 00:09:22.450 fused_ordering(201) 00:09:22.450 fused_ordering(202) 00:09:22.450 fused_ordering(203) 00:09:22.450 fused_ordering(204) 00:09:22.450 fused_ordering(205) 00:09:22.707 fused_ordering(206) 00:09:22.707 fused_ordering(207) 00:09:22.707 fused_ordering(208) 00:09:22.707 fused_ordering(209) 00:09:22.707 fused_ordering(210) 00:09:22.707 fused_ordering(211) 00:09:22.707 fused_ordering(212) 00:09:22.707 fused_ordering(213) 00:09:22.707 fused_ordering(214) 00:09:22.707 fused_ordering(215) 00:09:22.707 fused_ordering(216) 00:09:22.707 fused_ordering(217) 00:09:22.707 fused_ordering(218) 00:09:22.707 fused_ordering(219) 00:09:22.707 fused_ordering(220) 00:09:22.707 fused_ordering(221) 00:09:22.707 fused_ordering(222) 00:09:22.707 fused_ordering(223) 00:09:22.707 fused_ordering(224) 00:09:22.707 fused_ordering(225) 00:09:22.707 fused_ordering(226) 00:09:22.707 fused_ordering(227) 00:09:22.707 fused_ordering(228) 00:09:22.707 fused_ordering(229) 00:09:22.707 fused_ordering(230) 00:09:22.707 fused_ordering(231) 00:09:22.707 fused_ordering(232) 00:09:22.707 fused_ordering(233) 00:09:22.707 fused_ordering(234) 00:09:22.707 fused_ordering(235) 00:09:22.707 fused_ordering(236) 00:09:22.707 fused_ordering(237) 00:09:22.707 fused_ordering(238) 00:09:22.707 fused_ordering(239) 00:09:22.707 fused_ordering(240) 00:09:22.707 fused_ordering(241) 00:09:22.707 fused_ordering(242) 00:09:22.707 fused_ordering(243) 00:09:22.707 fused_ordering(244) 00:09:22.707 fused_ordering(245) 00:09:22.707 fused_ordering(246) 00:09:22.707 fused_ordering(247) 00:09:22.707 fused_ordering(248) 00:09:22.707 fused_ordering(249) 00:09:22.707 fused_ordering(250) 00:09:22.707 fused_ordering(251) 00:09:22.707 fused_ordering(252) 00:09:22.707 fused_ordering(253) 00:09:22.707 fused_ordering(254) 00:09:22.707 fused_ordering(255) 00:09:22.707 fused_ordering(256) 00:09:22.707 fused_ordering(257) 00:09:22.707 fused_ordering(258) 00:09:22.707 fused_ordering(259) 00:09:22.707 fused_ordering(260) 
00:09:22.707 fused_ordering(261) 00:09:22.707 fused_ordering(262) 00:09:22.707 fused_ordering(263) 00:09:22.707 fused_ordering(264) 00:09:22.707 fused_ordering(265) 00:09:22.707 fused_ordering(266) 00:09:22.707 fused_ordering(267) 00:09:22.707 fused_ordering(268) 00:09:22.707 fused_ordering(269) 00:09:22.707 fused_ordering(270) 00:09:22.707 fused_ordering(271) 00:09:22.707 fused_ordering(272) 00:09:22.707 fused_ordering(273) 00:09:22.707 fused_ordering(274) 00:09:22.707 fused_ordering(275) 00:09:22.707 fused_ordering(276) 00:09:22.707 fused_ordering(277) 00:09:22.707 fused_ordering(278) 00:09:22.707 fused_ordering(279) 00:09:22.707 fused_ordering(280) 00:09:22.707 fused_ordering(281) 00:09:22.707 fused_ordering(282) 00:09:22.707 fused_ordering(283) 00:09:22.707 fused_ordering(284) 00:09:22.707 fused_ordering(285) 00:09:22.707 fused_ordering(286) 00:09:22.707 fused_ordering(287) 00:09:22.707 fused_ordering(288) 00:09:22.707 fused_ordering(289) 00:09:22.707 fused_ordering(290) 00:09:22.707 fused_ordering(291) 00:09:22.707 fused_ordering(292) 00:09:22.707 fused_ordering(293) 00:09:22.707 fused_ordering(294) 00:09:22.707 fused_ordering(295) 00:09:22.707 fused_ordering(296) 00:09:22.707 fused_ordering(297) 00:09:22.707 fused_ordering(298) 00:09:22.707 fused_ordering(299) 00:09:22.707 fused_ordering(300) 00:09:22.707 fused_ordering(301) 00:09:22.707 fused_ordering(302) 00:09:22.707 fused_ordering(303) 00:09:22.707 fused_ordering(304) 00:09:22.707 fused_ordering(305) 00:09:22.708 fused_ordering(306) 00:09:22.708 fused_ordering(307) 00:09:22.708 fused_ordering(308) 00:09:22.708 fused_ordering(309) 00:09:22.708 fused_ordering(310) 00:09:22.708 fused_ordering(311) 00:09:22.708 fused_ordering(312) 00:09:22.708 fused_ordering(313) 00:09:22.708 fused_ordering(314) 00:09:22.708 fused_ordering(315) 00:09:22.708 fused_ordering(316) 00:09:22.708 fused_ordering(317) 00:09:22.708 fused_ordering(318) 00:09:22.708 fused_ordering(319) 00:09:22.708 fused_ordering(320) 00:09:22.708 fused_ordering(321) 00:09:22.708 fused_ordering(322) 00:09:22.708 fused_ordering(323) 00:09:22.708 fused_ordering(324) 00:09:22.708 fused_ordering(325) 00:09:22.708 fused_ordering(326) 00:09:22.708 fused_ordering(327) 00:09:22.708 fused_ordering(328) 00:09:22.708 fused_ordering(329) 00:09:22.708 fused_ordering(330) 00:09:22.708 fused_ordering(331) 00:09:22.708 fused_ordering(332) 00:09:22.708 fused_ordering(333) 00:09:22.708 fused_ordering(334) 00:09:22.708 fused_ordering(335) 00:09:22.708 fused_ordering(336) 00:09:22.708 fused_ordering(337) 00:09:22.708 fused_ordering(338) 00:09:22.708 fused_ordering(339) 00:09:22.708 fused_ordering(340) 00:09:22.708 fused_ordering(341) 00:09:22.708 fused_ordering(342) 00:09:22.708 fused_ordering(343) 00:09:22.708 fused_ordering(344) 00:09:22.708 fused_ordering(345) 00:09:22.708 fused_ordering(346) 00:09:22.708 fused_ordering(347) 00:09:22.708 fused_ordering(348) 00:09:22.708 fused_ordering(349) 00:09:22.708 fused_ordering(350) 00:09:22.708 fused_ordering(351) 00:09:22.708 fused_ordering(352) 00:09:22.708 fused_ordering(353) 00:09:22.708 fused_ordering(354) 00:09:22.708 fused_ordering(355) 00:09:22.708 fused_ordering(356) 00:09:22.708 fused_ordering(357) 00:09:22.708 fused_ordering(358) 00:09:22.708 fused_ordering(359) 00:09:22.708 fused_ordering(360) 00:09:22.708 fused_ordering(361) 00:09:22.708 fused_ordering(362) 00:09:22.708 fused_ordering(363) 00:09:22.708 fused_ordering(364) 00:09:22.708 fused_ordering(365) 00:09:22.708 fused_ordering(366) 00:09:22.708 fused_ordering(367) 00:09:22.708 
fused_ordering(368) 00:09:22.708 ... fused_ordering(1012) 00:09:24.405 [repetitive fused_ordering counter output elided: entries 368 through 1012 logged sequentially between 00:09:22.708 and 00:09:24.405]
00:09:24.405 fused_ordering(1013) 00:09:24.405 fused_ordering(1014) 00:09:24.405 fused_ordering(1015) 00:09:24.405 fused_ordering(1016) 00:09:24.405 fused_ordering(1017) 00:09:24.405 fused_ordering(1018) 00:09:24.405 fused_ordering(1019) 00:09:24.405 fused_ordering(1020) 00:09:24.405 fused_ordering(1021) 00:09:24.405 fused_ordering(1022) 00:09:24.405 fused_ordering(1023) 00:09:24.405 23:35:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:09:24.405 23:35:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:09:24.405 23:35:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:24.405 23:35:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:09:24.405 23:35:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:24.405 23:35:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:09:24.405 23:35:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:24.405 23:35:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:24.405 rmmod nvme_tcp 00:09:24.405 rmmod nvme_fabrics 00:09:24.405 rmmod nvme_keyring 00:09:24.405 23:35:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:24.405 23:35:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:09:24.405 23:35:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:09:24.405 23:35:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 3716709 ']' 00:09:24.405 23:35:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 3716709 00:09:24.405 23:35:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 3716709 ']' 00:09:24.405 23:35:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 3716709 00:09:24.405 23:35:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:09:24.405 23:35:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:24.405 23:35:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3716709 00:09:24.405 23:35:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:24.405 23:35:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:24.405 23:35:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3716709' 00:09:24.405 killing process with pid 3716709 00:09:24.405 23:35:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 3716709 00:09:24.405 23:35:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 3716709 00:09:24.665 23:35:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:24.665 23:35:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:24.665 23:35:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:24.665 23:35:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:24.665 23:35:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:24.665 23:35:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.665 23:35:59 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:24.665 23:35:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.198 23:36:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:27.198 00:09:27.198 real 0m7.688s 00:09:27.198 user 0m4.778s 00:09:27.198 sys 0m3.544s 00:09:27.198 23:36:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:27.198 23:36:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:27.198 ************************************ 00:09:27.198 END TEST nvmf_fused_ordering 00:09:27.198 ************************************ 00:09:27.198 23:36:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:27.198 23:36:01 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:27.198 23:36:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:27.198 23:36:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:27.198 23:36:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:27.198 ************************************ 00:09:27.198 START TEST nvmf_delete_subsystem 00:09:27.198 ************************************ 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:27.198 * Looking for test storage... 00:09:27.198 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:27.198 23:36:01 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:27.198 23:36:01 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:09:27.198 23:36:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:29.099 23:36:03 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:29.099 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:29.099 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:29.099 23:36:03 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:29.099 Found net devices under 0000:09:00.0: cvl_0_0 00:09:29.099 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.100 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:29.100 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.100 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:29.100 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:29.100 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:29.100 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:29.100 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.100 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:29.100 Found net devices under 0000:09:00.1: cvl_0_1 00:09:29.100 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.100 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:29.100 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:09:29.100 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:29.100 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:29.100 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:29.100 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:29.100 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:29.100 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:29.100 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:29.100 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:29.100 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:29.100 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:29.100 23:36:03 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:29.100 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:29.100 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:29.100 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:29.100 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:29.100 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:29.100 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:29.100 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:29.100 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:29.100 23:36:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:29.100 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:29.100 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:29.100 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:29.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:29.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:09:29.100 00:09:29.100 --- 10.0.0.2 ping statistics --- 00:09:29.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.100 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:09:29.100 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:29.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:29.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:09:29.100 00:09:29.100 --- 10.0.0.1 ping statistics --- 00:09:29.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.100 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:09:29.100 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:29.100 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:09:29.100 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:29.100 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:29.100 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:29.100 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:29.100 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:29.100 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:29.100 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:29.100 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:29.100 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:29.100 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:29.100 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:29.100 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=3719166 00:09:29.100 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:29.100 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 3719166 00:09:29.100 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 3719166 ']' 00:09:29.100 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.100 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:29.100 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.100 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:29.100 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:29.100 [2024-07-15 23:36:04.117857] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:09:29.100 [2024-07-15 23:36:04.117936] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:29.100 EAL: No free 2048 kB hugepages reported on node 1 00:09:29.100 [2024-07-15 23:36:04.190428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:29.388 [2024-07-15 23:36:04.299625] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:29.388 [2024-07-15 23:36:04.299682] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:29.388 [2024-07-15 23:36:04.299710] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:29.388 [2024-07-15 23:36:04.299724] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:29.388 [2024-07-15 23:36:04.299733] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:29.388 [2024-07-15 23:36:04.299813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.388 [2024-07-15 23:36:04.299818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.388 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:29.388 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:09:29.388 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:29.388 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:29.388 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:29.388 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:29.388 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:29.388 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.388 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:29.388 [2024-07-15 23:36:04.446724] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:29.388 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.388 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:29.388 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.388 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:29.388 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.388 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:29.388 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.388 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:29.388 [2024-07-15 23:36:04.462907] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:29.388 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.388 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:29.388 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.388 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:29.388 NULL1 00:09:29.388 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:09:29.388 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:29.388 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.388 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:29.388 Delay0 00:09:29.388 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.388 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:29.388 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.388 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:29.646 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.646 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3719194 00:09:29.646 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:29.646 23:36:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:29.646 EAL: No free 2048 kB hugepages reported on node 1 00:09:29.646 [2024-07-15 23:36:04.537630] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
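For readers reproducing this scenario outside the harness, the trace above amounts to the following sequence. This is a minimal sketch, assuming a local SPDK checkout at $SPDK, a target app already running inside the cvl_0_0_ns_spdk namespace as set up earlier, and scripts/rpc.py standing in for the harness's rpc_cmd wrapper; the RPC names and arguments mirror the xtrace output verbatim, while the $SPDK path is a placeholder.

SPDK=/path/to/spdk                 # assumption: adjust to your checkout
RPC="$SPDK/scripts/rpc.py"

# Target-side plumbing, as traced above.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512            # 1000 MiB null bdev, 512-byte blocks
$RPC bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000 # ~1 s added latency keeps I/O queued
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Drive queue-depth-128 random I/O from the initiator, then delete the
# subsystem underneath it; the outstanding commands complete with errors.
"$SPDK/build/bin/spdk_nvme_perf" -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
sleep 2
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
wait

The delay bdev is the design point here: with every read and write held for roughly a second, a queue depth of 128 guarantees the delete races in-flight commands, which is exactly what the aborted completions below demonstrate.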
00:09:31.553 23:36:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:31.554 23:36:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.554 23:36:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:31.554 [repetitive perf completion records elided: interleaved 'Read/Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' entries as the subsystem is deleted under load]
00:09:31.554 [2024-07-15 23:36:06.620302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f84a4000c00 is same with the state(5) to be set
00:09:31.554 [further 'completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' records elided]
00:09:32.487 [2024-07-15 23:36:07.597380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7fac0 is same with the state(5) to be set
00:09:32.745 ['completed with error (sct=0, sc=8)' records elided]
00:09:32.745 [2024-07-15 23:36:07.622674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7e5c0 is same with the state(5) to be set
00:09:32.745 ['completed with error (sct=0, sc=8)' records elided]
00:09:32.745 [2024-07-15 23:36:07.622917] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f84a400d600 is same with
the state(5) to be set 00:09:32.745 Write completed with error (sct=0, sc=8) 00:09:32.745 Read completed with error (sct=0, sc=8) 00:09:32.745 Write completed with error (sct=0, sc=8) 00:09:32.745 Read completed with error (sct=0, sc=8) 00:09:32.745 Read completed with error (sct=0, sc=8) 00:09:32.745 Write completed with error (sct=0, sc=8) 00:09:32.745 Read completed with error (sct=0, sc=8) 00:09:32.745 Read completed with error (sct=0, sc=8) 00:09:32.745 Write completed with error (sct=0, sc=8) 00:09:32.745 Write completed with error (sct=0, sc=8) 00:09:32.745 Write completed with error (sct=0, sc=8) 00:09:32.745 Write completed with error (sct=0, sc=8) 00:09:32.745 Read completed with error (sct=0, sc=8) 00:09:32.745 Write completed with error (sct=0, sc=8) 00:09:32.745 Read completed with error (sct=0, sc=8) 00:09:32.745 Write completed with error (sct=0, sc=8) 00:09:32.745 Read completed with error (sct=0, sc=8) 00:09:32.745 Read completed with error (sct=0, sc=8) 00:09:32.745 Write completed with error (sct=0, sc=8) 00:09:32.745 Read completed with error (sct=0, sc=8) 00:09:32.745 Read completed with error (sct=0, sc=8) 00:09:32.745 Read completed with error (sct=0, sc=8) 00:09:32.745 Read completed with error (sct=0, sc=8) 00:09:32.745 Read completed with error (sct=0, sc=8) 00:09:32.745 Read completed with error (sct=0, sc=8) 00:09:32.745 Read completed with error (sct=0, sc=8) 00:09:32.745 Read completed with error (sct=0, sc=8) 00:09:32.745 Read completed with error (sct=0, sc=8) 00:09:32.745 Read completed with error (sct=0, sc=8) 00:09:32.745 Write completed with error (sct=0, sc=8) 00:09:32.745 Write completed with error (sct=0, sc=8) 00:09:32.745 Write completed with error (sct=0, sc=8) 00:09:32.745 Read completed with error (sct=0, sc=8) 00:09:32.745 Write completed with error (sct=0, sc=8) 00:09:32.745 Read completed with error (sct=0, sc=8) 00:09:32.745 Read completed with error (sct=0, sc=8) 00:09:32.745 Read completed with error (sct=0, sc=8) 00:09:32.746 Read completed with error (sct=0, sc=8) 00:09:32.746 Read completed with error (sct=0, sc=8) 00:09:32.746 Read completed with error (sct=0, sc=8) 00:09:32.746 Read completed with error (sct=0, sc=8) 00:09:32.746 Read completed with error (sct=0, sc=8) 00:09:32.746 Read completed with error (sct=0, sc=8) 00:09:32.746 [2024-07-15 23:36:07.623195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7e980 is same with the state(5) to be set 00:09:32.746 Write completed with error (sct=0, sc=8) 00:09:32.746 Read completed with error (sct=0, sc=8) 00:09:32.746 Read completed with error (sct=0, sc=8) 00:09:32.746 Read completed with error (sct=0, sc=8) 00:09:32.746 Write completed with error (sct=0, sc=8) 00:09:32.746 Write completed with error (sct=0, sc=8) 00:09:32.746 Read completed with error (sct=0, sc=8) 00:09:32.746 Read completed with error (sct=0, sc=8) 00:09:32.746 Read completed with error (sct=0, sc=8) 00:09:32.746 Write completed with error (sct=0, sc=8) 00:09:32.746 Write completed with error (sct=0, sc=8) 00:09:32.746 Write completed with error (sct=0, sc=8) 00:09:32.746 Read completed with error (sct=0, sc=8) 00:09:32.746 Write completed with error (sct=0, sc=8) 00:09:32.746 Read completed with error (sct=0, sc=8) 00:09:32.746 Read completed with error (sct=0, sc=8) 00:09:32.746 Read completed with error (sct=0, sc=8) 00:09:32.746 Read completed with error (sct=0, sc=8) 00:09:32.746 Read completed with error (sct=0, sc=8) 00:09:32.746 Read completed with error (sct=0, sc=8) 
00:09:32.746 Read completed with error (sct=0, sc=8) 00:09:32.746 [2024-07-15 23:36:07.623360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f84a400cfe0 is same with the state(5) to be set 00:09:32.746 Initializing NVMe Controllers 00:09:32.746 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:32.746 Controller IO queue size 128, less than required. 00:09:32.746 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:32.746 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:32.746 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:32.746 Initialization complete. Launching workers. 00:09:32.746 ======================================================== 00:09:32.746 Latency(us) 00:09:32.746 Device Information : IOPS MiB/s Average min max 00:09:32.746 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 188.05 0.09 910331.28 714.64 2003756.05 00:09:32.746 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 163.73 0.08 945590.21 715.85 2003939.21 00:09:32.746 ======================================================== 00:09:32.746 Total : 351.78 0.17 926742.35 714.64 2003939.21 00:09:32.746 00:09:32.746 [2024-07-15 23:36:07.624448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d7fac0 (9): Bad file descriptor 00:09:32.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:09:32.746 23:36:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.746 23:36:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:09:32.746 23:36:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3719194 00:09:32.746 23:36:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:33.309 23:36:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:33.309 23:36:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3719194 00:09:33.309 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3719194) - No such process 00:09:33.309 23:36:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3719194 00:09:33.309 23:36:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:09:33.309 23:36:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 3719194 00:09:33.309 23:36:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:09:33.309 23:36:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:33.309 23:36:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:09:33.309 23:36:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:33.309 23:36:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 3719194 00:09:33.309 23:36:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:09:33.309 23:36:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:33.309 23:36:08 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:33.309 23:36:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:33.309 23:36:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:33.310 23:36:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.310 23:36:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:33.310 23:36:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.310 23:36:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:33.310 23:36:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.310 23:36:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:33.310 [2024-07-15 23:36:08.147049] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:33.310 23:36:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.310 23:36:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:33.310 23:36:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.310 23:36:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:33.310 23:36:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.310 23:36:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3720036 00:09:33.310 23:36:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:33.310 23:36:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3720036 00:09:33.310 23:36:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:33.310 23:36:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:33.310 EAL: No free 2048 kB hugepages reported on node 1 00:09:33.310 [2024-07-15 23:36:08.210543] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
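What the trace above is exercising: delete_subsystem.sh re-creates nqn.2016-06.io.spdk:cnode1 with a listener and the Delay0 namespace, launches spdk_nvme_perf against it in the background (pid 3720036 in this run), and then polls that pid with kill -0 while the subsystem is deleted underneath the running I/O. A minimal bash sketch of the polling pattern, reconstructed from the xtrace output rather than copied from delete_subsystem.sh (variable names are illustrative):

    # Poll a backgrounded spdk_nvme_perf with kill -0, as the trace above does.
    # kill -0 delivers no signal; it only tests whether the pid still exists.
    perf_pid=$!                        # pid of the backgrounded spdk_nvme_perf
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 20 )) && break    # give up after ~10s of 0.5s naps
        sleep 0.5
    done
    # perf is expected to exit non-zero once the subsystem vanishes mid-I/O
    wait "$perf_pid" || echo "spdk_nvme_perf reported errors (expected here)"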
00:09:33.566 through 00:09:36.079 — the delay loop in delete_subsystem.sh ((( delay++ > 20 )); kill -0 3720036; sleep 0.5) iterated six times while spdk_nvme_perf was still running; the per-iteration xtrace records are collapsed here.
00:09:36.336 Initializing NVMe Controllers
00:09:36.336 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:36.336 Controller IO queue size 128, less than required.
00:09:36.336 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:36.336 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:09:36.336 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:09:36.336 Initialization complete. Launching workers.
00:09:36.336 ======================================================== 00:09:36.336 Latency(us) 00:09:36.336 Device Information : IOPS MiB/s Average min max 00:09:36.336 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004484.57 1000203.94 1011136.26 00:09:36.336 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004804.23 1000158.24 1042291.38 00:09:36.336 ======================================================== 00:09:36.336 Total : 256.00 0.12 1004644.40 1000158.24 1042291.38 00:09:36.336 00:09:36.594 23:36:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:36.594 23:36:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3720036 00:09:36.594 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3720036) - No such process 00:09:36.594 23:36:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3720036 00:09:36.594 23:36:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:36.594 23:36:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:36.594 23:36:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:36.594 23:36:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:09:36.594 23:36:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:36.594 23:36:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:09:36.594 23:36:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:36.594 23:36:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:36.594 rmmod nvme_tcp 00:09:36.594 rmmod nvme_fabrics 00:09:36.853 rmmod nvme_keyring 00:09:36.853 23:36:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:36.853 23:36:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:09:36.853 23:36:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:09:36.853 23:36:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 3719166 ']' 00:09:36.853 23:36:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 3719166 00:09:36.853 23:36:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 3719166 ']' 00:09:36.853 23:36:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 3719166 00:09:36.853 23:36:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:09:36.853 23:36:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:36.853 23:36:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3719166 00:09:36.853 23:36:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:36.853 23:36:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:36.853 23:36:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3719166' 00:09:36.853 killing process with pid 3719166 00:09:36.853 23:36:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 3719166 00:09:36.853 23:36:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 
3719166 00:09:37.113 23:36:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:37.113 23:36:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:37.113 23:36:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:37.113 23:36:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:37.113 23:36:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:37.113 23:36:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.113 23:36:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:37.113 23:36:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.018 23:36:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:39.018 00:09:39.018 real 0m12.307s 00:09:39.018 user 0m27.589s 00:09:39.018 sys 0m3.037s 00:09:39.018 23:36:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:39.018 23:36:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:39.018 ************************************ 00:09:39.018 END TEST nvmf_delete_subsystem 00:09:39.018 ************************************ 00:09:39.018 23:36:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:39.018 23:36:14 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:09:39.018 23:36:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:39.018 23:36:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:39.018 23:36:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:39.018 ************************************ 00:09:39.018 START TEST nvmf_ns_masking 00:09:39.018 ************************************ 00:09:39.018 23:36:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:09:39.277 * Looking for test storage... 
00:09:39.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:39.277 23:36:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:39.277 23:36:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:09:39.277 23:36:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:39.277 23:36:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:39.277 23:36:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:39.277 23:36:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:39.277 23:36:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:39.277 23:36:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:39.277 23:36:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:39.277 23:36:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:39.277 23:36:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:39.277 23:36:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:39.277 23:36:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:39.277 23:36:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:39.277 23:36:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:39.277 23:36:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:39.277 23:36:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:39.277 23:36:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:39.277 23:36:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:39.277 23:36:14 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:39.277 23:36:14 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:39.277 23:36:14 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:39.277 23:36:14 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.277 23:36:14 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.277 23:36:14 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.277 23:36:14 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:09:39.277 23:36:14 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.277 23:36:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:09:39.277 23:36:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:39.277 23:36:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:39.277 23:36:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:39.277 23:36:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:39.277 23:36:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:39.277 23:36:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:39.277 23:36:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:39.277 23:36:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:39.277 23:36:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:39.277 23:36:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:09:39.278 23:36:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:09:39.278 23:36:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:09:39.278 23:36:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=2b1928e3-a140-4374-af92-8f6fcc53976a 00:09:39.278 23:36:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:09:39.278 23:36:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=d8b49b5c-c06e-4e39-9fa7-9a649fb4bd96 00:09:39.278 23:36:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:09:39.278 23:36:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:09:39.278 23:36:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:09:39.278 23:36:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:09:39.278 23:36:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=f5c2e64d-3b0e-4090-970e-abb2e26cd705 00:09:39.278 23:36:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:09:39.278 23:36:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:39.278 23:36:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:39.278 23:36:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:39.278 23:36:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:39.278 23:36:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:39.278 23:36:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:39.278 23:36:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:39.278 23:36:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.278 23:36:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:39.278 23:36:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:39.278 23:36:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:09:39.278 23:36:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:41.806 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:41.806 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:09:41.806 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:41.806 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:41.806 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:41.806 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:41.806 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:41.806 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:09:41.806 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:41.806 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:09:41.806 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:09:41.806 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:09:41.806 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:09:41.806 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:09:41.806 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:09:41.806 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:41.806 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:41.806 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:41.806 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:41.806 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:41.806 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:41.807 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:41.807 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:41.807 
23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:41.807 Found net devices under 0000:09:00.0: cvl_0_0 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:41.807 Found net devices under 0000:09:00.1: cvl_0_1 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:41.807 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:41.807 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:09:41.807 00:09:41.807 --- 10.0.0.2 ping statistics --- 00:09:41.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.807 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:41.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:41.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:09:41.807 00:09:41.807 --- 10.0.0.1 ping statistics --- 00:09:41.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.807 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=3722570 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 3722570 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 3722570 ']' 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:41.807 23:36:16 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:41.807 [2024-07-15 23:36:16.545078] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:09:41.807 [2024-07-15 23:36:16.545160] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:41.807 EAL: No free 2048 kB hugepages reported on node 1 00:09:41.807 [2024-07-15 23:36:16.607130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.807 [2024-07-15 23:36:16.715519] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:41.807 [2024-07-15 23:36:16.715576] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:41.807 [2024-07-15 23:36:16.715604] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:41.807 [2024-07-15 23:36:16.715615] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:41.807 [2024-07-15 23:36:16.715624] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:41.807 [2024-07-15 23:36:16.715655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:41.807 23:36:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:42.065 [2024-07-15 23:36:17.094427] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:42.065 23:36:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:09:42.065 23:36:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:09:42.065 23:36:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:42.323 Malloc1 00:09:42.323 23:36:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:42.581 Malloc2 00:09:42.581 23:36:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
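Bring-up pattern for the ns_masking run: nvmf_tgt is started inside the cvl_0_0_ns_spdk network namespace prepared earlier (target side 10.0.0.2, initiator side 10.0.0.1), then configured over /var/tmp/spdk.sock with rpc.py. A condensed sketch of the RPC sequence the trace performs — every call below appears verbatim in the trace, only the surrounding xtrace noise is dropped:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192      # TCP transport, flags as in the trace
    $rpc bdev_malloc_create 64 512 -b Malloc1         # 64 MiB bdev, 512 B blocks
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420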
00:09:42.839 23:36:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:09:43.403 23:36:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:43.403 [2024-07-15 23:36:18.462745] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:43.403 23:36:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:09:43.403 23:36:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f5c2e64d-3b0e-4090-970e-abb2e26cd705 -a 10.0.0.2 -s 4420 -i 4 00:09:43.661 23:36:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:09:43.661 23:36:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:43.661 23:36:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:43.661 23:36:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:43.661 23:36:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:45.584 23:36:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:45.584 23:36:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:45.584 23:36:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:45.584 23:36:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:45.584 23:36:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:45.584 23:36:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:45.584 23:36:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:45.584 23:36:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:45.842 23:36:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:45.842 23:36:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:45.842 23:36:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:09:45.842 23:36:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:45.842 23:36:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:45.842 [ 0]:0x1 00:09:45.842 23:36:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:45.842 23:36:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:45.842 23:36:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dc6822b09894443fa9046c20106ec531 00:09:45.842 23:36:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dc6822b09894443fa9046c20106ec531 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:45.842 23:36:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
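The ns_is_visible checks around these add_ns calls reduce to two nvme-cli invocations: confirm the NSID is listed by nvme list-ns, then read the namespace NGUID and treat an all-zero value as masked. A bash sketch of the helper's behavior as reconstructed from the trace (not the literal ns_masking.sh body; assumes the controller resolved to /dev/nvme0 as above and that jq is available):

    ns_is_visible() {
        local nsid=$1                              # hex NSID, e.g. 0x1
        nvme list-ns /dev/nvme0 | grep -q "$nsid" || return 1
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        # a masked namespace identifies itself with an all-zero NGUID
        [[ $nguid != "00000000000000000000000000000000" ]]
    }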
00:09:46.099 23:36:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:09:46.099 23:36:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:46.099 23:36:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:46.099 [ 0]:0x1 00:09:46.099 23:36:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:46.099 23:36:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:46.099 23:36:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dc6822b09894443fa9046c20106ec531 00:09:46.099 23:36:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dc6822b09894443fa9046c20106ec531 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:46.099 23:36:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:09:46.099 23:36:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:46.099 23:36:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:46.099 [ 1]:0x2 00:09:46.099 23:36:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:46.099 23:36:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:46.099 23:36:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cb3a90e20482452ca0f6342499d36baf 00:09:46.099 23:36:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cb3a90e20482452ca0f6342499d36baf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:46.099 23:36:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:09:46.099 23:36:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:46.356 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.356 23:36:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.614 23:36:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:09:46.872 23:36:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:09:46.872 23:36:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f5c2e64d-3b0e-4090-970e-abb2e26cd705 -a 10.0.0.2 -s 4420 -i 4 00:09:47.129 23:36:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:09:47.129 23:36:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:47.129 23:36:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:47.129 23:36:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:09:47.129 23:36:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:09:47.129 23:36:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:49.063 23:36:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:49.063 23:36:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:49.063 23:36:24 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:49.063 23:36:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:49.063 23:36:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:49.063 23:36:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:49.063 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:49.063 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:49.320 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:49.320 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:49.320 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:09:49.320 23:36:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:49.320 23:36:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:49.320 23:36:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:49.320 23:36:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:49.320 23:36:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:49.320 23:36:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:49.320 23:36:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:49.320 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:49.320 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:49.320 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:49.320 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:49.320 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:49.320 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:49.320 23:36:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:49.320 23:36:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:49.320 23:36:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:49.320 23:36:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:49.320 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:09:49.320 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:49.320 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:49.320 [ 0]:0x2 00:09:49.320 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:49.320 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:49.320 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cb3a90e20482452ca0f6342499d36baf 00:09:49.320 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
cb3a90e20482452ca0f6342499d36baf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:49.320 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:49.578 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:09:49.578 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:49.578 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:49.578 [ 0]:0x1 00:09:49.578 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:49.578 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:49.578 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dc6822b09894443fa9046c20106ec531 00:09:49.578 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dc6822b09894443fa9046c20106ec531 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:49.578 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:09:49.578 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:49.578 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:49.578 [ 1]:0x2 00:09:49.578 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:49.578 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:49.578 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cb3a90e20482452ca0f6342499d36baf 00:09:49.578 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cb3a90e20482452ca0f6342499d36baf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:49.578 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:49.834 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:09:49.834 23:36:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:49.834 23:36:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:49.834 23:36:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:49.835 23:36:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:49.835 23:36:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:49.835 23:36:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:49.835 23:36:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:49.835 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:49.835 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:49.835 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:49.835 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:49.835 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:09:49.835 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:49.835 23:36:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:49.835 23:36:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:49.835 23:36:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:49.835 23:36:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:49.835 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:09:49.835 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:49.835 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:49.835 [ 0]:0x2 00:09:49.835 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:49.835 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:50.092 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cb3a90e20482452ca0f6342499d36baf 00:09:50.092 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cb3a90e20482452ca0f6342499d36baf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:50.092 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:09:50.092 23:36:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:50.092 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.092 23:36:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:50.362 23:36:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:09:50.362 23:36:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f5c2e64d-3b0e-4090-970e-abb2e26cd705 -a 10.0.0.2 -s 4420 -i 4 00:09:50.362 23:36:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:50.362 23:36:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:50.362 23:36:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:50.362 23:36:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:09:50.362 23:36:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:09:50.362 23:36:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
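Note: the visibility probe repeated throughout this test treats a namespace as masked when it either drops out of nvme list-ns or identifies with an all-zero NGUID. A minimal standalone sketch of that check, assuming nvme-cli and jq are installed and /dev/nvme0 is the connected controller (the ns_visible name is illustrative, not the script's own helper):

# Return 0 iff namespace $1 (decimal NSID) is exposed to this host via /dev/nvme0.
ns_visible() {
    local nsid=$1
    # A masked namespace may vanish from the active namespace list entirely...
    nvme list-ns /dev/nvme0 | grep -q "0x$(printf '%x' "$nsid")" || return 1
    # ...or still enumerate but identify with an all-zero NGUID, as seen above.
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
    [[ $nguid != 00000000000000000000000000000000 ]]
}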
00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:52.889 [ 0]:0x1 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dc6822b09894443fa9046c20106ec531 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dc6822b09894443fa9046c20106ec531 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:52.889 [ 1]:0x2 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cb3a90e20482452ca0f6342499d36baf 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cb3a90e20482452ca0f6342499d36baf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:52.889 [ 0]:0x2 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:52.889 23:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:52.889 23:36:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cb3a90e20482452ca0f6342499d36baf 00:09:52.889 23:36:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cb3a90e20482452ca0f6342499d36baf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:52.889 23:36:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:52.889 23:36:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:52.889 23:36:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:52.889 23:36:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:52.889 23:36:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:52.889 23:36:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:52.889 23:36:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:52.889 23:36:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:52.889 23:36:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:52.889 23:36:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:52.889 23:36:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:52.889 23:36:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:53.147 [2024-07-15 23:36:28.235651] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:09:53.147 request: 00:09:53.147 { 00:09:53.147 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:53.147 "nsid": 2, 00:09:53.147 "host": "nqn.2016-06.io.spdk:host1", 00:09:53.147 "method": "nvmf_ns_remove_host", 00:09:53.147 "req_id": 1 00:09:53.147 } 00:09:53.147 Got JSON-RPC error response 00:09:53.147 response: 00:09:53.147 { 00:09:53.147 "code": -32602, 00:09:53.147 "message": "Invalid parameters" 00:09:53.147 } 00:09:53.147 23:36:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:53.147 23:36:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:53.147 23:36:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:53.147 23:36:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:53.147 23:36:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:09:53.147 23:36:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:53.147 23:36:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:53.147 23:36:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:53.147 23:36:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:53.147 23:36:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:53.147 23:36:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:53.147 23:36:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:53.147 23:36:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:53.147 23:36:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:53.147 23:36:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:53.147 23:36:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:53.420 23:36:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:53.420 23:36:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:53.420 23:36:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:53.420 23:36:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:53.420 23:36:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:53.420 23:36:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:53.420 23:36:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:09:53.420 23:36:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:53.420 23:36:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:53.420 [ 0]:0x2 00:09:53.420 23:36:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:53.420 23:36:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:53.420 23:36:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cb3a90e20482452ca0f6342499d36baf 00:09:53.420 23:36:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
cb3a90e20482452ca0f6342499d36baf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:53.420 23:36:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:09:53.420 23:36:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:53.420 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.420 23:36:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3724140 00:09:53.420 23:36:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:09:53.420 23:36:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:09:53.420 23:36:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3724140 /var/tmp/host.sock 00:09:53.420 23:36:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 3724140 ']' 00:09:53.420 23:36:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:09:53.420 23:36:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:53.420 23:36:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:09:53.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:09:53.420 23:36:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:53.420 23:36:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:53.420 [2024-07-15 23:36:28.468011] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
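From here the test drives two SPDK processes at once: the nvmf target on the default RPC socket, and this freshly started spdk_tgt (-r /var/tmp/host.sock -m 2) acting as the host side, so each hostrpc call below is just rpc.py pointed at the second socket. A condensed sketch of the split, with the full script path abbreviated to rpc.py (commands as used in this run):

# Target side -- default /var/tmp/spdk.sock: publish namespaces, control masking.
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

# Host side -- the spdk_tgt listening on /var/tmp/host.sock: attach as an NVMe-oF initiator.
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 \
    -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0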
00:09:53.420 [2024-07-15 23:36:28.468100] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3724140 ] 00:09:53.420 EAL: No free 2048 kB hugepages reported on node 1 00:09:53.420 [2024-07-15 23:36:28.527253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.678 [2024-07-15 23:36:28.636817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.935 23:36:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:53.935 23:36:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:53.935 23:36:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:54.191 23:36:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:54.447 23:36:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 2b1928e3-a140-4374-af92-8f6fcc53976a 00:09:54.447 23:36:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:54.447 23:36:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 2B1928E3A1404374AF928F6FCC53976A -i 00:09:54.710 23:36:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid d8b49b5c-c06e-4e39-9fa7-9a649fb4bd96 00:09:54.710 23:36:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:54.710 23:36:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g D8B49B5CC06E4E399FA79A649FB4BD96 -i 00:09:54.968 23:36:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:55.225 23:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:09:55.482 23:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:55.483 23:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:56.047 nvme0n1 00:09:56.047 23:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:09:56.047 23:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:09:56.305 nvme1n2 00:09:56.305 23:36:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:09:56.305 23:36:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:09:56.305 23:36:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:09:56.305 23:36:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:09:56.305 23:36:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:09:56.563 23:36:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:09:56.563 23:36:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:09:56.563 23:36:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:09:56.563 23:36:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:09:56.820 23:36:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 2b1928e3-a140-4374-af92-8f6fcc53976a == \2\b\1\9\2\8\e\3\-\a\1\4\0\-\4\3\7\4\-\a\f\9\2\-\8\f\6\f\c\c\5\3\9\7\6\a ]] 00:09:56.820 23:36:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:09:56.820 23:36:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:09:56.820 23:36:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:09:57.078 23:36:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ d8b49b5c-c06e-4e39-9fa7-9a649fb4bd96 == \d\8\b\4\9\b\5\c\-\c\0\6\e\-\4\e\3\9\-\9\f\a\7\-\9\a\6\4\9\f\b\4\b\d\9\6 ]] 00:09:57.078 23:36:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 3724140 00:09:57.078 23:36:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 3724140 ']' 00:09:57.078 23:36:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 3724140 00:09:57.078 23:36:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:57.078 23:36:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:57.078 23:36:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3724140 00:09:57.078 23:36:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:57.078 23:36:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:57.078 23:36:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3724140' 00:09:57.078 killing process with pid 3724140 00:09:57.078 23:36:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 3724140 00:09:57.078 23:36:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 3724140 00:09:57.644 23:36:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:57.902 23:36:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:09:57.902 23:36:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:09:57.902 23:36:32 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:57.902 23:36:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:09:57.902 23:36:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:57.902 23:36:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:09:57.902 23:36:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:57.902 23:36:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:57.902 rmmod nvme_tcp 00:09:57.902 rmmod nvme_fabrics 00:09:57.902 rmmod nvme_keyring 00:09:57.902 23:36:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:57.902 23:36:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:09:57.902 23:36:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:09:57.902 23:36:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 3722570 ']' 00:09:57.902 23:36:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 3722570 00:09:57.902 23:36:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 3722570 ']' 00:09:57.902 23:36:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 3722570 00:09:57.902 23:36:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:57.902 23:36:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:57.902 23:36:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3722570 00:09:57.902 23:36:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:57.902 23:36:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:57.902 23:36:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3722570' 00:09:57.902 killing process with pid 3722570 00:09:57.902 23:36:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 3722570 00:09:57.902 23:36:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 3722570 00:09:58.162 23:36:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:58.162 23:36:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:58.162 23:36:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:58.162 23:36:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:58.162 23:36:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:58.162 23:36:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.162 23:36:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:58.162 23:36:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.695 23:36:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:00.695 00:10:00.695 real 0m21.088s 00:10:00.695 user 0m27.326s 00:10:00.695 sys 0m4.167s 00:10:00.695 23:36:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:00.695 23:36:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:00.695 ************************************ 00:10:00.695 END TEST nvmf_ns_masking 00:10:00.695 ************************************ 00:10:00.695 23:36:35 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:10:00.695 23:36:35 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:10:00.695 23:36:35 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:10:00.695 23:36:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:00.695 23:36:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:00.695 23:36:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:00.695 ************************************ 00:10:00.695 START TEST nvmf_nvme_cli 00:10:00.695 ************************************ 00:10:00.695 23:36:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:10:00.695 * Looking for test storage... 00:10:00.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:00.695 23:36:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:00.695 23:36:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:10:00.695 23:36:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:00.695 23:36:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:00.695 23:36:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:00.695 23:36:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:00.695 23:36:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:00.695 23:36:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:00.695 23:36:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:00.695 23:36:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:00.695 23:36:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:00.695 23:36:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:00.695 23:36:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:00.695 23:36:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:00.695 23:36:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:00.695 23:36:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:00.695 23:36:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:00.695 23:36:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:00.695 23:36:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:00.695 23:36:35 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:00.695 23:36:35 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:00.695 23:36:35 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:00.695 23:36:35 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.696 23:36:35 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.696 23:36:35 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.696 23:36:35 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:10:00.696 23:36:35 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.696 23:36:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:10:00.696 23:36:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:00.696 23:36:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:00.696 23:36:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:00.696 23:36:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:00.696 23:36:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:00.696 23:36:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:00.696 23:36:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:00.696 23:36:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:00.696 23:36:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:00.696 23:36:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:00.696 23:36:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:10:00.696 23:36:35 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:10:00.696 23:36:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:00.696 23:36:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:00.696 23:36:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:00.696 23:36:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:00.696 23:36:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:00.696 23:36:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.696 23:36:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:00.696 23:36:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.696 23:36:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:00.696 23:36:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:00.696 23:36:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:10:00.696 23:36:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:02.600 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:02.600 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:02.600 Found net devices under 0000:09:00.0: cvl_0_0 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:10:02.600 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:02.601 Found net devices under 0000:09:00.1: cvl_0_1 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:02.601 23:36:37 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:02.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:02.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:10:02.601 00:10:02.601 --- 10.0.0.2 ping statistics --- 00:10:02.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.601 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:02.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:02.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:10:02.601 00:10:02.601 --- 10.0.0.1 ping statistics --- 00:10:02.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.601 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=3726686 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 3726686 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 3726686 ']' 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:02.601 23:36:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:02.601 [2024-07-15 23:36:37.632117] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
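The ping exchange above closes out nvmf_tcp_init: the target-side port is isolated in its own network namespace so initiator-to-target traffic crosses the physical link rather than loopback. Condensed from the common.sh steps logged just before this (interface names as detected in this run):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port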
00:10:02.601 [2024-07-15 23:36:37.632198] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:02.601 EAL: No free 2048 kB hugepages reported on node 1 00:10:02.601 [2024-07-15 23:36:37.697434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:02.859 [2024-07-15 23:36:37.807839] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:02.859 [2024-07-15 23:36:37.807888] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:02.859 [2024-07-15 23:36:37.807917] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:02.859 [2024-07-15 23:36:37.807929] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:02.859 [2024-07-15 23:36:37.807948] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:02.859 [2024-07-15 23:36:37.808024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:02.859 [2024-07-15 23:36:37.808082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:02.859 [2024-07-15 23:36:37.811975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:02.859 [2024-07-15 23:36:37.811987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.793 23:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:03.793 23:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:10:03.793 23:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:03.793 23:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:03.793 23:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:03.793 23:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:03.793 23:36:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:03.793 23:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.793 23:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:03.793 [2024-07-15 23:36:38.648178] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:03.793 23:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.793 23:36:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:03.793 23:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.793 23:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:03.793 Malloc0 00:10:03.793 23:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.793 23:36:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:03.793 23:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.793 23:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:03.793 Malloc1 00:10:03.793 23:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.793 23:36:38 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:10:03.793 23:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.793 23:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:03.793 23:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.794 23:36:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:03.794 23:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.794 23:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:03.794 23:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.794 23:36:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:03.794 23:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.794 23:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:03.794 23:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.794 23:36:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:03.794 23:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.794 23:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:03.794 [2024-07-15 23:36:38.732979] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:03.794 23:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.794 23:36:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:03.794 23:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.794 23:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:03.794 23:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.794 23:36:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:10:03.794 00:10:03.794 Discovery Log Number of Records 2, Generation counter 2 00:10:03.794 =====Discovery Log Entry 0====== 00:10:03.794 trtype: tcp 00:10:03.794 adrfam: ipv4 00:10:03.794 subtype: current discovery subsystem 00:10:03.794 treq: not required 00:10:03.794 portid: 0 00:10:03.794 trsvcid: 4420 00:10:03.794 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:03.794 traddr: 10.0.0.2 00:10:03.794 eflags: explicit discovery connections, duplicate discovery information 00:10:03.794 sectype: none 00:10:03.794 =====Discovery Log Entry 1====== 00:10:03.794 trtype: tcp 00:10:03.794 adrfam: ipv4 00:10:03.794 subtype: nvme subsystem 00:10:03.794 treq: not required 00:10:03.794 portid: 0 00:10:03.794 trsvcid: 4420 00:10:03.794 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:03.794 traddr: 10.0.0.2 00:10:03.794 eflags: none 00:10:03.794 sectype: none 00:10:03.794 23:36:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:10:03.794 23:36:38 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:10:03.794 23:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:10:03.794 23:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:03.794 23:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:10:03.794 23:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:10:03.794 23:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:03.794 23:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:10:03.794 23:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:03.794 23:36:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:10:03.794 23:36:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:04.361 23:36:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:10:04.361 23:36:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:10:04.361 23:36:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:04.361 23:36:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:10:04.361 23:36:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:10:04.361 23:36:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:10:06.887 23:36:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:06.887 23:36:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:06.887 23:36:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:06.887 23:36:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:10:06.887 23:36:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:06.887 23:36:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:10:06.887 23:36:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:10:06.887 23:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:10:06.887 23:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:06.887 23:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:10:06.887 23:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:10:06.887 23:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:06.887 23:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:10:06.887 23:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:06.887 23:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:10:06.887 23:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:10:06.887 23:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:06.887 23:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:10:06.887 23:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:10:06.887 23:36:41 
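
The waitforserial step traced here boils down to polling lsblk until the expected number of namespaces carrying the subsystem serial appear. A minimal sketch, assuming the same names; $NVME_HOSTNQN and $NVME_HOSTID are the values exported by nvmf/common.sh:

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        "--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID"
    waitforserial() {
        local serial=$1 expected=${2:-1} i=0
        while (( i++ <= 15 )); do
            sleep 2
            # count block devices whose SERIAL column matches the subsystem serial
            (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == expected )) && return 0
        done
        return 1
    }
    waitforserial SPDKISFASTANDAWESOME 2   # two namespaces are expected here
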
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:06.887 23:36:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:10:06.887 /dev/nvme0n1 ]] 00:10:06.887 23:36:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:10:06.887 23:36:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:10:06.887 23:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:10:06.887 23:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:06.887 23:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:10:06.887 23:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:10:06.887 23:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:06.887 23:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:10:06.887 23:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:06.887 23:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:10:06.887 23:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:10:06.887 23:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:06.887 23:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:10:06.887 23:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:10:06.887 23:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:06.887 23:36:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:10:06.887 23:36:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:07.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.146 23:36:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:07.146 23:36:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:10:07.146 23:36:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:07.146 23:36:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:07.146 23:36:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:07.146 23:36:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:07.146 23:36:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:10:07.146 23:36:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:10:07.146 23:36:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:07.146 23:36:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.146 23:36:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:07.146 23:36:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.146 23:36:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:07.146 23:36:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:10:07.146 23:36:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:07.146 23:36:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:10:07.146 23:36:42 nvmf_tcp.nvmf_nvme_cli -- 
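
The disconnect path is the mirror image: drop the controller, then poll until no block device with the test serial remains. A simplified sketch (the real helper also bounds its retries):

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
        sleep 1
    done
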
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:07.146 23:36:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:10:07.146 23:36:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:07.146 23:36:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:07.146 rmmod nvme_tcp 00:10:07.146 rmmod nvme_fabrics 00:10:07.146 rmmod nvme_keyring 00:10:07.146 23:36:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:07.146 23:36:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:10:07.146 23:36:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:10:07.146 23:36:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 3726686 ']' 00:10:07.146 23:36:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 3726686 00:10:07.146 23:36:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 3726686 ']' 00:10:07.146 23:36:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 3726686 00:10:07.146 23:36:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:10:07.146 23:36:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:07.146 23:36:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3726686 00:10:07.146 23:36:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:07.146 23:36:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:07.146 23:36:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3726686' 00:10:07.146 killing process with pid 3726686 00:10:07.146 23:36:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 3726686 00:10:07.146 23:36:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 3726686 00:10:07.406 23:36:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:07.406 23:36:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:07.406 23:36:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:07.406 23:36:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:07.406 23:36:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:07.406 23:36:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.406 23:36:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:07.406 23:36:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.942 23:36:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:09.942 00:10:09.942 real 0m9.222s 00:10:09.942 user 0m18.985s 00:10:09.942 sys 0m2.356s 00:10:09.942 23:36:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:09.942 23:36:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:09.942 ************************************ 00:10:09.942 END TEST nvmf_nvme_cli 00:10:09.942 ************************************ 00:10:09.942 23:36:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:09.942 23:36:44 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:10:09.942 23:36:44 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
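
nvmftestfini, whose effects show in the rmmod output above, essentially unloads the host-side modules and reaps the target. Roughly, with $nvmfpid being the target pid captured at startup:

    modprobe -v -r nvme-tcp      # the script runs these under set +e,
    modprobe -v -r nvme-fabrics  # so failures here are tolerated
    kill "$nvmfpid" && wait "$nvmfpid"   # assumes the target is a child of this shell
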
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:10:09.942 23:36:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:09.942 23:36:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:09.942 23:36:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:09.942 ************************************ 00:10:09.942 START TEST nvmf_vfio_user 00:10:09.942 ************************************ 00:10:09.942 23:36:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:10:09.942 * Looking for test storage... 00:10:09.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:09.942 23:36:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:09.942 23:36:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:10:09.942 23:36:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:09.942 23:36:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:09.942 23:36:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:09.942 23:36:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:09.942 23:36:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:09.942 23:36:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:09.942 23:36:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:09.942 23:36:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:09.942 23:36:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:09.942 23:36:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:09.942 23:36:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:09.942 23:36:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:09.942 23:36:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:09.942 23:36:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:09.942 23:36:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:09.942 23:36:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:09.942 23:36:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:09.942 23:36:44 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:09.942 23:36:44 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:09.942 23:36:44 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:09.942 23:36:44 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.942 23:36:44 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.942 23:36:44 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.942 23:36:44 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:10:09.943 23:36:44 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.943 23:36:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:10:09.943 23:36:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:09.943 23:36:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:09.943 23:36:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:09.943 23:36:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:09.943 23:36:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:09.943 23:36:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:09.943 23:36:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:09.943 23:36:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:09.943 23:36:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:10:09.943 23:36:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:09.943 23:36:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:10:09.943 
23:36:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:09.943 23:36:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:10:09.943 23:36:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:10:09.943 23:36:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:10:09.943 23:36:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:10:09.943 23:36:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:10:09.943 23:36:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:10:09.943 23:36:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3727632 00:10:09.943 23:36:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:10:09.943 23:36:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3727632' 00:10:09.943 Process pid: 3727632 00:10:09.943 23:36:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:09.943 23:36:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3727632 00:10:09.943 23:36:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 3727632 ']' 00:10:09.943 23:36:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.943 23:36:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:09.943 23:36:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.943 23:36:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:09.943 23:36:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:09.943 [2024-07-15 23:36:44.643929] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:10:09.943 [2024-07-15 23:36:44.644060] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:09.943 EAL: No free 2048 kB hugepages reported on node 1 00:10:09.943 [2024-07-15 23:36:44.700797] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:09.943 [2024-07-15 23:36:44.807648] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:09.943 [2024-07-15 23:36:44.807710] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:09.943 [2024-07-15 23:36:44.807729] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:09.943 [2024-07-15 23:36:44.807739] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:09.943 [2024-07-15 23:36:44.807762] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
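
Starting the target for these tests, as the trace shows, is just nvmf_tgt pinned to four cores plus a readiness wait. A hedged sketch follows; the binary path is shortened, and the polling loop is one simple stand-in for the script's waitforlisten helper:

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    nvmfpid=$!
    # poll the default RPC socket (/var/tmp/spdk.sock) until the app answers
    until rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 1; done
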
00:10:09.943 [2024-07-15 23:36:44.807857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:09.943 [2024-07-15 23:36:44.807922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:09.943 [2024-07-15 23:36:44.807995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:09.943 [2024-07-15 23:36:44.807999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.943 23:36:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:09.943 23:36:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:10:09.943 23:36:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:10:10.875 23:36:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:10:11.133 23:36:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:10:11.133 23:36:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:10:11.133 23:36:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:11.133 23:36:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:10:11.133 23:36:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:11.391 Malloc1 00:10:11.391 23:36:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:10:11.648 23:36:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:10:11.910 23:36:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:10:12.167 23:36:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:12.167 23:36:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:10:12.167 23:36:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:12.425 Malloc2 00:10:12.425 23:36:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:10:12.682 23:36:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:10:12.939 23:36:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:10:13.206 23:36:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:10:13.206 23:36:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:10:13.206 23:36:48 
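
Per device, the setup loop above amounts to one malloc bdev, one subsystem, and one vfio-user listener rooted under /var/run/vfio-user; condensed here for the first device (the second pass repeats it with Malloc2, cnode2, SPDK2, and .../vfio-user2/2):

    rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    rpc.py bdev_malloc_create 64 512 -b Malloc1   # 64 MiB bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
        -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
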
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:13.206 23:36:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:10:13.206 23:36:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:10:13.206 23:36:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:10:13.206 [2024-07-15 23:36:48.250979] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:10:13.206 [2024-07-15 23:36:48.251038] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3728053 ] 00:10:13.206 EAL: No free 2048 kB hugepages reported on node 1 00:10:13.206 [2024-07-15 23:36:48.285235] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:10:13.206 [2024-07-15 23:36:48.293459] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:13.206 [2024-07-15 23:36:48.293488] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fd7fd666000 00:10:13.206 [2024-07-15 23:36:48.294451] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:13.206 [2024-07-15 23:36:48.295446] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:13.206 [2024-07-15 23:36:48.296450] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:13.206 [2024-07-15 23:36:48.297457] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:13.206 [2024-07-15 23:36:48.298459] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:13.206 [2024-07-15 23:36:48.299464] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:13.206 [2024-07-15 23:36:48.300473] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:13.206 [2024-07-15 23:36:48.301477] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:13.206 [2024-07-15 23:36:48.302493] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:13.206 [2024-07-15 23:36:48.302514] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fd7fd65b000 00:10:13.206 [2024-07-15 23:36:48.303667] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:13.206 [2024-07-15 23:36:48.319647] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:10:13.206 [2024-07-15 23:36:48.319689] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:10:13.206 [2024-07-15 23:36:48.324624] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:10:13.206 [2024-07-15 23:36:48.324678] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:10:13.206 [2024-07-15 23:36:48.324767] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:10:13.206 [2024-07-15 23:36:48.324795] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:10:13.206 [2024-07-15 23:36:48.324805] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:10:13.206 [2024-07-15 23:36:48.325621] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:10:13.206 [2024-07-15 23:36:48.325642] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:10:13.206 [2024-07-15 23:36:48.325661] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:10:13.206 [2024-07-15 23:36:48.326632] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:10:13.206 [2024-07-15 23:36:48.326651] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:10:13.206 [2024-07-15 23:36:48.326665] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:10:13.463 [2024-07-15 23:36:48.327631] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:10:13.463 [2024-07-15 23:36:48.327648] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:10:13.463 [2024-07-15 23:36:48.328635] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:10:13.463 [2024-07-15 23:36:48.328654] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:10:13.463 [2024-07-15 23:36:48.328663] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:10:13.463 [2024-07-15 23:36:48.328674] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:10:13.463 [2024-07-15 23:36:48.328784] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:10:13.463 [2024-07-15 23:36:48.328791] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:10:13.463 [2024-07-15 23:36:48.328799] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:10:13.463 [2024-07-15 23:36:48.329644] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:10:13.463 [2024-07-15 23:36:48.330646] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:10:13.463 [2024-07-15 23:36:48.331657] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:10:13.463 [2024-07-15 23:36:48.332649] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:13.463 [2024-07-15 23:36:48.332746] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:10:13.463 [2024-07-15 23:36:48.333663] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:10:13.464 [2024-07-15 23:36:48.333681] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:10:13.464 [2024-07-15 23:36:48.333690] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:10:13.464 [2024-07-15 23:36:48.333713] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:10:13.464 [2024-07-15 23:36:48.333732] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:10:13.464 [2024-07-15 23:36:48.333757] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:13.464 [2024-07-15 23:36:48.333770] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:13.464 [2024-07-15 23:36:48.333788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:13.464 [2024-07-15 23:36:48.333847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:10:13.464 [2024-07-15 23:36:48.333862] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:10:13.464 [2024-07-15 23:36:48.333874] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:10:13.464 [2024-07-15 23:36:48.333882] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:10:13.464 [2024-07-15 23:36:48.333889] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:10:13.464 [2024-07-15 23:36:48.333896] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:10:13.464 [2024-07-15 23:36:48.333904] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:10:13.464 [2024-07-15 23:36:48.333911] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:10:13.464 [2024-07-15 23:36:48.333923] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:10:13.464 [2024-07-15 23:36:48.333954] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:10:13.464 [2024-07-15 23:36:48.333983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:10:13.464 [2024-07-15 23:36:48.334009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:13.464 [2024-07-15 23:36:48.334023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:13.464 [2024-07-15 23:36:48.334035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:13.464 [2024-07-15 23:36:48.334047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:13.464 [2024-07-15 23:36:48.334055] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:10:13.464 [2024-07-15 23:36:48.334073] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:10:13.464 [2024-07-15 23:36:48.334089] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:10:13.464 [2024-07-15 23:36:48.334100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:10:13.464 [2024-07-15 23:36:48.334111] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:10:13.464 [2024-07-15 23:36:48.334119] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:10:13.464 [2024-07-15 23:36:48.334129] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:10:13.464 [2024-07-15 23:36:48.334139] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:10:13.464 [2024-07-15 23:36:48.334155] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:13.464 [2024-07-15 23:36:48.334170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:10:13.464 [2024-07-15 23:36:48.334237] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:10:13.464 [2024-07-15 23:36:48.334268] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:10:13.464 [2024-07-15 23:36:48.334282] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:10:13.464 [2024-07-15 23:36:48.334290] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:10:13.464 [2024-07-15 23:36:48.334299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:10:13.464 [2024-07-15 23:36:48.334313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:10:13.464 [2024-07-15 23:36:48.334329] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:10:13.464 [2024-07-15 23:36:48.334344] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:10:13.464 [2024-07-15 23:36:48.334358] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:10:13.464 [2024-07-15 23:36:48.334370] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:13.464 [2024-07-15 23:36:48.334377] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:13.464 [2024-07-15 23:36:48.334386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:13.464 [2024-07-15 23:36:48.334408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:10:13.464 [2024-07-15 23:36:48.334429] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:10:13.464 [2024-07-15 23:36:48.334443] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:10:13.464 [2024-07-15 23:36:48.334454] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:13.464 [2024-07-15 23:36:48.334462] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:13.464 [2024-07-15 23:36:48.334471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:13.464 [2024-07-15 23:36:48.334487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:10:13.464 [2024-07-15 23:36:48.334500] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:10:13.464 [2024-07-15 23:36:48.334511] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
00:10:13.464 [2024-07-15 23:36:48.334524] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:10:13.464 [2024-07-15 23:36:48.334533] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:10:13.464 [2024-07-15 23:36:48.334541] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:10:13.464 [2024-07-15 23:36:48.334552] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:10:13.464 [2024-07-15 23:36:48.334560] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:10:13.464 [2024-07-15 23:36:48.334567] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:10:13.464 [2024-07-15 23:36:48.334575] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:10:13.464 [2024-07-15 23:36:48.334600] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:10:13.464 [2024-07-15 23:36:48.334618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:10:13.464 [2024-07-15 23:36:48.334636] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:10:13.464 [2024-07-15 23:36:48.334648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:10:13.464 [2024-07-15 23:36:48.334664] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:10:13.464 [2024-07-15 23:36:48.334675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:10:13.464 [2024-07-15 23:36:48.334690] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:13.464 [2024-07-15 23:36:48.334704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:10:13.465 [2024-07-15 23:36:48.334726] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:10:13.465 [2024-07-15 23:36:48.334736] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:10:13.465 [2024-07-15 23:36:48.334742] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:10:13.465 [2024-07-15 23:36:48.334748] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:10:13.465 [2024-07-15 23:36:48.334757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:10:13.465 [2024-07-15 23:36:48.334768] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:10:13.465 
[2024-07-15 23:36:48.334776] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:10:13.465 [2024-07-15 23:36:48.334785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:10:13.465 [2024-07-15 23:36:48.334795] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:10:13.465 [2024-07-15 23:36:48.334803] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:13.465 [2024-07-15 23:36:48.334811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:13.465 [2024-07-15 23:36:48.334823] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:10:13.465 [2024-07-15 23:36:48.334830] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:10:13.465 [2024-07-15 23:36:48.334839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:10:13.465 [2024-07-15 23:36:48.334850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:10:13.465 [2024-07-15 23:36:48.334872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:10:13.465 [2024-07-15 23:36:48.334890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:10:13.465 [2024-07-15 23:36:48.334902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:10:13.465 ===================================================== 00:10:13.465 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:13.465 ===================================================== 00:10:13.465 Controller Capabilities/Features 00:10:13.465 ================================ 00:10:13.465 Vendor ID: 4e58 00:10:13.465 Subsystem Vendor ID: 4e58 00:10:13.465 Serial Number: SPDK1 00:10:13.465 Model Number: SPDK bdev Controller 00:10:13.465 Firmware Version: 24.09 00:10:13.465 Recommended Arb Burst: 6 00:10:13.465 IEEE OUI Identifier: 8d 6b 50 00:10:13.465 Multi-path I/O 00:10:13.465 May have multiple subsystem ports: Yes 00:10:13.465 May have multiple controllers: Yes 00:10:13.465 Associated with SR-IOV VF: No 00:10:13.465 Max Data Transfer Size: 131072 00:10:13.465 Max Number of Namespaces: 32 00:10:13.465 Max Number of I/O Queues: 127 00:10:13.465 NVMe Specification Version (VS): 1.3 00:10:13.465 NVMe Specification Version (Identify): 1.3 00:10:13.465 Maximum Queue Entries: 256 00:10:13.465 Contiguous Queues Required: Yes 00:10:13.465 Arbitration Mechanisms Supported 00:10:13.465 Weighted Round Robin: Not Supported 00:10:13.465 Vendor Specific: Not Supported 00:10:13.465 Reset Timeout: 15000 ms 00:10:13.465 Doorbell Stride: 4 bytes 00:10:13.465 NVM Subsystem Reset: Not Supported 00:10:13.465 Command Sets Supported 00:10:13.465 NVM Command Set: Supported 00:10:13.465 Boot Partition: Not Supported 00:10:13.465 Memory Page Size Minimum: 4096 bytes 00:10:13.465 Memory Page Size Maximum: 4096 bytes 00:10:13.465 Persistent Memory Region: Not Supported 
00:10:13.465 Optional Asynchronous Events Supported 00:10:13.465 Namespace Attribute Notices: Supported 00:10:13.465 Firmware Activation Notices: Not Supported 00:10:13.465 ANA Change Notices: Not Supported 00:10:13.465 PLE Aggregate Log Change Notices: Not Supported 00:10:13.465 LBA Status Info Alert Notices: Not Supported 00:10:13.465 EGE Aggregate Log Change Notices: Not Supported 00:10:13.465 Normal NVM Subsystem Shutdown event: Not Supported 00:10:13.465 Zone Descriptor Change Notices: Not Supported 00:10:13.465 Discovery Log Change Notices: Not Supported 00:10:13.465 Controller Attributes 00:10:13.465 128-bit Host Identifier: Supported 00:10:13.465 Non-Operational Permissive Mode: Not Supported 00:10:13.465 NVM Sets: Not Supported 00:10:13.465 Read Recovery Levels: Not Supported 00:10:13.465 Endurance Groups: Not Supported 00:10:13.465 Predictable Latency Mode: Not Supported 00:10:13.465 Traffic Based Keep Alive: Not Supported 00:10:13.465 Namespace Granularity: Not Supported 00:10:13.465 SQ Associations: Not Supported 00:10:13.465 UUID List: Not Supported 00:10:13.465 Multi-Domain Subsystem: Not Supported 00:10:13.465 Fixed Capacity Management: Not Supported 00:10:13.465 Variable Capacity Management: Not Supported 00:10:13.465 Delete Endurance Group: Not Supported 00:10:13.465 Delete NVM Set: Not Supported 00:10:13.465 Extended LBA Formats Supported: Not Supported 00:10:13.465 Flexible Data Placement Supported: Not Supported 00:10:13.465 00:10:13.465 Controller Memory Buffer Support 00:10:13.465 ================================ 00:10:13.465 Supported: No 00:10:13.465 00:10:13.465 Persistent Memory Region Support 00:10:13.465 ================================ 00:10:13.465 Supported: No 00:10:13.465 00:10:13.465 Admin Command Set Attributes 00:10:13.465 ============================ 00:10:13.465 Security Send/Receive: Not Supported 00:10:13.465 Format NVM: Not Supported 00:10:13.465 Firmware Activate/Download: Not Supported 00:10:13.465 Namespace Management: Not Supported 00:10:13.465 Device Self-Test: Not Supported 00:10:13.465 Directives: Not Supported 00:10:13.465 NVMe-MI: Not Supported 00:10:13.465 Virtualization Management: Not Supported 00:10:13.465 Doorbell Buffer Config: Not Supported 00:10:13.465 Get LBA Status Capability: Not Supported 00:10:13.465 Command & Feature Lockdown Capability: Not Supported 00:10:13.465 Abort Command Limit: 4 00:10:13.465 Async Event Request Limit: 4 00:10:13.465 Number of Firmware Slots: N/A 00:10:13.465 Firmware Slot 1 Read-Only: N/A 00:10:13.465 Firmware Activation Without Reset: N/A 00:10:13.465 Multiple Update Detection Support: N/A 00:10:13.465 Firmware Update Granularity: No Information Provided 00:10:13.465 Per-Namespace SMART Log: No 00:10:13.465 Asymmetric Namespace Access Log Page: Not Supported 00:10:13.465 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:10:13.465 Command Effects Log Page: Supported 00:10:13.465 Get Log Page Extended Data: Supported 00:10:13.465 Telemetry Log Pages: Not Supported 00:10:13.465 Persistent Event Log Pages: Not Supported 00:10:13.465 Supported Log Pages Log Page: May Support 00:10:13.465 Commands Supported & Effects Log Page: Not Supported 00:10:13.465 Feature Identifiers & Effects Log Page: May Support 00:10:13.465 NVMe-MI Commands & Effects Log Page: May Support 00:10:13.465 Data Area 4 for Telemetry Log: Not Supported 00:10:13.465 Error Log Page Entries Supported: 128 00:10:13.465 Keep Alive: Supported 00:10:13.465 Keep Alive Granularity: 10000 ms 00:10:13.465 00:10:13.465 NVM Command Set Attributes
00:10:13.465 ========================== 00:10:13.465 Submission Queue Entry Size 00:10:13.465 Max: 64 00:10:13.465 Min: 64 00:10:13.465 Completion Queue Entry Size 00:10:13.465 Max: 16 00:10:13.465 Min: 16 00:10:13.465 Number of Namespaces: 32 00:10:13.465 Compare Command: Supported 00:10:13.465 Write Uncorrectable Command: Not Supported 00:10:13.465 Dataset Management Command: Supported 00:10:13.465 Write Zeroes Command: Supported 00:10:13.465 Set Features Save Field: Not Supported 00:10:13.465 Reservations: Not Supported 00:10:13.465 Timestamp: Not Supported 00:10:13.465 Copy: Supported 00:10:13.465 Volatile Write Cache: Present 00:10:13.465 Atomic Write Unit (Normal): 1 00:10:13.465 Atomic Write Unit (PFail): 1 00:10:13.465 Atomic Compare & Write Unit: 1 00:10:13.465 Fused Compare & Write: Supported 00:10:13.465 Scatter-Gather List 00:10:13.465 SGL Command Set: Supported (Dword aligned) 00:10:13.465 SGL Keyed: Not Supported 00:10:13.465 SGL Bit Bucket Descriptor: Not Supported 00:10:13.465 SGL Metadata Pointer: Not Supported 00:10:13.465 Oversized SGL: Not Supported 00:10:13.465 SGL Metadata Address: Not Supported 00:10:13.465 SGL Offset: Not Supported 00:10:13.465 Transport SGL Data Block: Not Supported 00:10:13.465 Replay Protected Memory Block: Not Supported 00:10:13.465 00:10:13.465 Firmware Slot Information 00:10:13.465 ========================= 00:10:13.465 Active slot: 1 00:10:13.465 Slot 1 Firmware Revision: 24.09 00:10:13.465 00:10:13.465 00:10:13.465 Commands Supported and Effects 00:10:13.465 ============================== 00:10:13.465 Admin Commands 00:10:13.465 -------------- 00:10:13.465 Get Log Page (02h): Supported 00:10:13.465 Identify (06h): Supported 00:10:13.465 Abort (08h): Supported 00:10:13.466 Set Features (09h): Supported 00:10:13.466 Get Features (0Ah): Supported 00:10:13.466 Asynchronous Event Request (0Ch): Supported 00:10:13.466 Keep Alive (18h): Supported 00:10:13.466 I/O Commands 00:10:13.466 ------------ 00:10:13.466 Flush (00h): Supported LBA-Change 00:10:13.466 Write (01h): Supported LBA-Change 00:10:13.466 Read (02h): Supported 00:10:13.466 Compare (05h): Supported 00:10:13.466 Write Zeroes (08h): Supported LBA-Change 00:10:13.466 Dataset Management (09h): Supported LBA-Change 00:10:13.466 Copy (19h): Supported LBA-Change 00:10:13.466 00:10:13.466 Error Log 00:10:13.466 ========= 00:10:13.466 00:10:13.466 Arbitration 00:10:13.466 =========== 00:10:13.466 Arbitration Burst: 1 00:10:13.466 00:10:13.466 Power Management 00:10:13.466 ================ 00:10:13.466 Number of Power States: 1 00:10:13.466 Current Power State: Power State #0 00:10:13.466 Power State #0: 00:10:13.466 Max Power: 0.00 W 00:10:13.466 Non-Operational State: Operational 00:10:13.466 Entry Latency: Not Reported 00:10:13.466 Exit Latency: Not Reported 00:10:13.466 Relative Read Throughput: 0 00:10:13.466 Relative Read Latency: 0 00:10:13.466 Relative Write Throughput: 0 00:10:13.466 Relative Write Latency: 0 00:10:13.466 Idle Power: Not Reported 00:10:13.466 Active Power: Not Reported 00:10:13.466 Non-Operational Permissive Mode: Not Supported 00:10:13.466 00:10:13.466 Health Information 00:10:13.466 ================== 00:10:13.466 Critical Warnings: 00:10:13.466 Available Spare Space: OK 00:10:13.466 Temperature: OK 00:10:13.466 Device Reliability: OK 00:10:13.466 Read Only: No 00:10:13.466 Volatile Memory Backup: OK 00:10:13.466 Current Temperature: 0 Kelvin (-273 Celsius) 00:10:13.466 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:10:13.466 Available Spare: 0% 00:10:13.466 
[2024-07-15 23:36:48.335045] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:10:13.466 [2024-07-15 23:36:48.335063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:10:13.466 [2024-07-15 23:36:48.335107] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:10:13.466 [2024-07-15 23:36:48.335124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:13.466 [2024-07-15 23:36:48.335135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:13.466 [2024-07-15 23:36:48.335145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:13.466 [2024-07-15 23:36:48.335155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:13.466 [2024-07-15 23:36:48.338969] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:10:13.466 [2024-07-15 23:36:48.338992] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:10:13.466 [2024-07-15 23:36:48.339714] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:13.466 [2024-07-15 23:36:48.339798] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:10:13.466 [2024-07-15 23:36:48.339812] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:10:13.466 [2024-07-15 23:36:48.340729] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:10:13.466 [2024-07-15 23:36:48.340752] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:10:13.466 [2024-07-15 23:36:48.340805] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:10:13.466 [2024-07-15 23:36:48.342764] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:13.466 Available Spare Threshold: 0% 00:10:13.466 Life Percentage Used: 0% 00:10:13.466 Data Units Read: 0 00:10:13.466 Data Units Written: 0 00:10:13.466 Host Read Commands: 0 00:10:13.466 Host Write Commands: 0 00:10:13.466 Controller Busy Time: 0 minutes 00:10:13.466 Power Cycles: 0 00:10:13.466 Power On Hours: 0 hours 00:10:13.466 Unsafe Shutdowns: 0 00:10:13.466 Unrecoverable Media Errors: 0 00:10:13.466 Lifetime Error Log Entries: 0 00:10:13.466 Warning Temperature Time: 0 minutes 00:10:13.466 Critical Temperature Time: 0 minutes 00:10:13.466 00:10:13.466 Number of Queues 00:10:13.466 ================ 00:10:13.466 Number of I/O Submission Queues: 127 00:10:13.466 Number of I/O Completion Queues: 127 00:10:13.466 00:10:13.466 Active Namespaces 00:10:13.466 ================= 00:10:13.466 Namespace ID:1 00:10:13.466 Error Recovery Timeout: Unlimited 00:10:13.466 Command
Set Identifier: NVM (00h) 00:10:13.466 Deallocate: Supported 00:10:13.466 Deallocated/Unwritten Error: Not Supported 00:10:13.466 Deallocated Read Value: Unknown 00:10:13.466 Deallocate in Write Zeroes: Not Supported 00:10:13.466 Deallocated Guard Field: 0xFFFF 00:10:13.466 Flush: Supported 00:10:13.466 Reservation: Supported 00:10:13.466 Namespace Sharing Capabilities: Multiple Controllers 00:10:13.466 Size (in LBAs): 131072 (0GiB) 00:10:13.466 Capacity (in LBAs): 131072 (0GiB) 00:10:13.466 Utilization (in LBAs): 131072 (0GiB) 00:10:13.466 NGUID: 96507A39241148079E8441D61BD7C934 00:10:13.466 UUID: 96507a39-2411-4807-9e84-41d61bd7c934 00:10:13.466 Thin Provisioning: Not Supported 00:10:13.466 Per-NS Atomic Units: Yes 00:10:13.466 Atomic Boundary Size (Normal): 0 00:10:13.466 Atomic Boundary Size (PFail): 0 00:10:13.466 Atomic Boundary Offset: 0 00:10:13.466 Maximum Single Source Range Length: 65535 00:10:13.466 Maximum Copy Length: 65535 00:10:13.466 Maximum Source Range Count: 1 00:10:13.466 NGUID/EUI64 Never Reused: No 00:10:13.466 Namespace Write Protected: No 00:10:13.466 Number of LBA Formats: 1 00:10:13.466 Current LBA Format: LBA Format #00 00:10:13.466 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:13.466 00:10:13.466 23:36:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:10:13.466 EAL: No free 2048 kB hugepages reported on node 1 00:10:13.466 [2024-07-15 23:36:48.572788] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:18.751 Initializing NVMe Controllers 00:10:18.751 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:18.751 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:10:18.751 Initialization complete. Launching workers. 00:10:18.751 ======================================================== 00:10:18.751 Latency(us) 00:10:18.751 Device Information : IOPS MiB/s Average min max 00:10:18.751 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 35195.80 137.48 3637.58 1162.03 10272.54 00:10:18.751 ======================================================== 00:10:18.751 Total : 35195.80 137.48 3637.58 1162.03 10272.54 00:10:18.751 00:10:18.751 [2024-07-15 23:36:53.594900] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:18.752 23:36:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:10:18.752 EAL: No free 2048 kB hugepages reported on node 1 00:10:18.752 [2024-07-15 23:36:53.837048] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:24.016 Initializing NVMe Controllers 00:10:24.016 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:24.016 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:10:24.016 Initialization complete. Launching workers. 
00:10:24.016 ======================================================== 00:10:24.016 Latency(us) 00:10:24.016 Device Information : IOPS MiB/s Average min max 00:10:24.016 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.18 62.70 7982.86 6842.08 11957.42 00:10:24.016 ======================================================== 00:10:24.016 Total : 16051.18 62.70 7982.86 6842.08 11957.42 00:10:24.016 00:10:24.016 [2024-07-15 23:36:58.873560] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:24.016 23:36:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:24.016 EAL: No free 2048 kB hugepages reported on node 1 00:10:24.016 [2024-07-15 23:36:59.096638] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:29.276 [2024-07-15 23:37:04.163326] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:29.276 Initializing NVMe Controllers 00:10:29.276 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:29.276 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:29.276 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:10:29.276 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:10:29.276 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:10:29.276 Initialization complete. Launching workers. 00:10:29.276 Starting thread on core 2 00:10:29.276 Starting thread on core 3 00:10:29.276 Starting thread on core 1 00:10:29.276 23:37:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:10:29.276 EAL: No free 2048 kB hugepages reported on node 1 00:10:29.533 [2024-07-15 23:37:04.473403] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:32.812 [2024-07-15 23:37:07.700220] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:32.812 Initializing NVMe Controllers 00:10:32.812 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:32.812 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:32.812 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:10:32.812 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:10:32.812 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:10:32.812 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:10:32.812 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:32.812 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:32.812 Initialization complete. Launching workers. 
00:10:32.812 Starting thread on core 1 with urgent priority queue 00:10:32.812 Starting thread on core 2 with urgent priority queue 00:10:32.812 Starting thread on core 3 with urgent priority queue 00:10:32.812 Starting thread on core 0 with urgent priority queue 00:10:32.812 SPDK bdev Controller (SPDK1 ) core 0: 4710.67 IO/s 21.23 secs/100000 ios 00:10:32.812 SPDK bdev Controller (SPDK1 ) core 1: 5296.00 IO/s 18.88 secs/100000 ios 00:10:32.812 SPDK bdev Controller (SPDK1 ) core 2: 5528.00 IO/s 18.09 secs/100000 ios 00:10:32.812 SPDK bdev Controller (SPDK1 ) core 3: 4988.00 IO/s 20.05 secs/100000 ios 00:10:32.812 ======================================================== 00:10:32.812 00:10:32.812 23:37:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:10:32.812 EAL: No free 2048 kB hugepages reported on node 1 00:10:33.102 [2024-07-15 23:37:08.006528] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:33.102 Initializing NVMe Controllers 00:10:33.102 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:33.102 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:33.102 Namespace ID: 1 size: 0GB 00:10:33.102 Initialization complete. 00:10:33.102 INFO: using host memory buffer for IO 00:10:33.102 Hello world! 00:10:33.102 [2024-07-15 23:37:08.040160] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:33.102 23:37:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:10:33.102 EAL: No free 2048 kB hugepages reported on node 1 00:10:33.359 [2024-07-15 23:37:08.336241] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:34.293 Initializing NVMe Controllers 00:10:34.293 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:34.293 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:34.293 Initialization complete. Launching workers. 
00:10:34.293 submit (in ns) avg, min, max = 5897.1, 3493.3, 4020832.2 00:10:34.293 complete (in ns) avg, min, max = 24302.2, 2075.6, 5013986.7 00:10:34.293 00:10:34.293 Submit histogram 00:10:34.293 ================ 00:10:34.293 Range in us Cumulative Count 00:10:34.293 3.484 - 3.508: 0.1983% ( 27) 00:10:34.293 3.508 - 3.532: 1.1459% ( 129) 00:10:34.293 3.532 - 3.556: 3.4376% ( 312) 00:10:34.293 3.556 - 3.579: 8.9467% ( 750) 00:10:34.293 3.579 - 3.603: 16.1084% ( 975) 00:10:34.293 3.603 - 3.627: 24.8494% ( 1190) 00:10:34.293 3.627 - 3.650: 33.1497% ( 1130) 00:10:34.293 3.650 - 3.674: 41.0313% ( 1073) 00:10:34.293 3.674 - 3.698: 48.2518% ( 983) 00:10:34.293 3.698 - 3.721: 55.6045% ( 1001) 00:10:34.293 3.721 - 3.745: 60.5406% ( 672) 00:10:34.293 3.745 - 3.769: 64.9405% ( 599) 00:10:34.293 3.769 - 3.793: 68.2533% ( 451) 00:10:34.293 3.793 - 3.816: 72.0288% ( 514) 00:10:34.293 3.816 - 3.840: 75.3783% ( 456) 00:10:34.293 3.840 - 3.864: 79.2420% ( 526) 00:10:34.293 3.864 - 3.887: 82.3931% ( 429) 00:10:34.293 3.887 - 3.911: 84.7290% ( 318) 00:10:34.293 3.911 - 3.935: 87.2851% ( 348) 00:10:34.293 3.935 - 3.959: 89.1656% ( 256) 00:10:34.293 3.959 - 3.982: 90.7448% ( 215) 00:10:34.293 3.982 - 4.006: 92.0890% ( 183) 00:10:34.293 4.006 - 4.030: 93.1247% ( 141) 00:10:34.293 4.030 - 4.053: 94.1457% ( 139) 00:10:34.293 4.053 - 4.077: 95.1006% ( 130) 00:10:34.293 4.077 - 4.101: 95.6148% ( 70) 00:10:34.293 4.101 - 4.124: 96.0188% ( 55) 00:10:34.293 4.124 - 4.148: 96.3640% ( 47) 00:10:34.293 4.148 - 4.172: 96.5991% ( 32) 00:10:34.293 4.172 - 4.196: 96.7313% ( 18) 00:10:34.293 4.196 - 4.219: 96.8121% ( 11) 00:10:34.293 4.219 - 4.243: 96.8856% ( 10) 00:10:34.293 4.243 - 4.267: 97.0031% ( 16) 00:10:34.293 4.267 - 4.290: 97.0765% ( 10) 00:10:34.293 4.290 - 4.314: 97.1794% ( 14) 00:10:34.293 4.314 - 4.338: 97.2234% ( 6) 00:10:34.293 4.338 - 4.361: 97.2602% ( 5) 00:10:34.293 4.361 - 4.385: 97.3042% ( 6) 00:10:34.293 4.385 - 4.409: 97.3189% ( 2) 00:10:34.293 4.409 - 4.433: 97.3410% ( 3) 00:10:34.294 4.433 - 4.456: 97.3557% ( 2) 00:10:34.294 4.456 - 4.480: 97.3630% ( 1) 00:10:34.294 4.480 - 4.504: 97.3777% ( 2) 00:10:34.294 4.504 - 4.527: 97.3850% ( 1) 00:10:34.294 4.551 - 4.575: 97.3924% ( 1) 00:10:34.294 4.575 - 4.599: 97.4071% ( 2) 00:10:34.294 4.599 - 4.622: 97.4512% ( 6) 00:10:34.294 4.622 - 4.646: 97.4585% ( 1) 00:10:34.294 4.646 - 4.670: 97.4952% ( 5) 00:10:34.294 4.670 - 4.693: 97.5393% ( 6) 00:10:34.294 4.693 - 4.717: 97.5687% ( 4) 00:10:34.294 4.717 - 4.741: 97.6348% ( 9) 00:10:34.294 4.741 - 4.764: 97.7009% ( 9) 00:10:34.294 4.764 - 4.788: 97.7523% ( 7) 00:10:34.294 4.788 - 4.812: 97.7964% ( 6) 00:10:34.294 4.812 - 4.836: 97.8478% ( 7) 00:10:34.294 4.836 - 4.859: 97.8698% ( 3) 00:10:34.294 4.859 - 4.883: 97.9139% ( 6) 00:10:34.294 4.883 - 4.907: 97.9506% ( 5) 00:10:34.294 4.907 - 4.930: 97.9800% ( 4) 00:10:34.294 4.930 - 4.954: 97.9947% ( 2) 00:10:34.294 4.954 - 4.978: 98.0167% ( 3) 00:10:34.294 4.978 - 5.001: 98.0461% ( 4) 00:10:34.294 5.001 - 5.025: 98.0608% ( 2) 00:10:34.294 5.025 - 5.049: 98.0829% ( 3) 00:10:34.294 5.049 - 5.073: 98.0975% ( 2) 00:10:34.294 5.073 - 5.096: 98.1269% ( 4) 00:10:34.294 5.120 - 5.144: 98.1490% ( 3) 00:10:34.294 5.167 - 5.191: 98.1563% ( 1) 00:10:34.294 5.191 - 5.215: 98.1857% ( 4) 00:10:34.294 5.310 - 5.333: 98.2004% ( 2) 00:10:34.294 5.357 - 5.381: 98.2077% ( 1) 00:10:34.294 5.381 - 5.404: 98.2151% ( 1) 00:10:34.294 5.476 - 5.499: 98.2224% ( 1) 00:10:34.294 5.499 - 5.523: 98.2298% ( 1) 00:10:34.294 5.570 - 5.594: 98.2371% ( 1) 00:10:34.294 5.594 - 5.618: 98.2445% ( 1) 
00:10:34.294 5.689 - 5.713: 98.2518% ( 1) 00:10:34.294 5.831 - 5.855: 98.2591% ( 1) 00:10:34.294 5.855 - 5.879: 98.2665% ( 1) 00:10:34.294 5.879 - 5.902: 98.2812% ( 2) 00:10:34.294 5.902 - 5.926: 98.2959% ( 2) 00:10:34.294 5.926 - 5.950: 98.3032% ( 1) 00:10:34.294 5.973 - 5.997: 98.3106% ( 1) 00:10:34.294 5.997 - 6.021: 98.3179% ( 1) 00:10:34.294 6.068 - 6.116: 98.3326% ( 2) 00:10:34.294 6.210 - 6.258: 98.3399% ( 1) 00:10:34.294 6.258 - 6.305: 98.3473% ( 1) 00:10:34.294 6.542 - 6.590: 98.3546% ( 1) 00:10:34.294 6.590 - 6.637: 98.3620% ( 1) 00:10:34.294 6.684 - 6.732: 98.3767% ( 2) 00:10:34.294 6.874 - 6.921: 98.3840% ( 1) 00:10:34.294 6.921 - 6.969: 98.3914% ( 1) 00:10:34.294 6.969 - 7.016: 98.3987% ( 1) 00:10:34.294 7.064 - 7.111: 98.4061% ( 1) 00:10:34.294 7.111 - 7.159: 98.4134% ( 1) 00:10:34.294 7.159 - 7.206: 98.4207% ( 1) 00:10:34.294 7.253 - 7.301: 98.4354% ( 2) 00:10:34.294 7.443 - 7.490: 98.4501% ( 2) 00:10:34.294 7.633 - 7.680: 98.4648% ( 2) 00:10:34.294 7.680 - 7.727: 98.4722% ( 1) 00:10:34.294 7.822 - 7.870: 98.4795% ( 1) 00:10:34.294 7.917 - 7.964: 98.4942% ( 2) 00:10:34.294 7.964 - 8.012: 98.5089% ( 2) 00:10:34.294 8.059 - 8.107: 98.5236% ( 2) 00:10:34.294 8.107 - 8.154: 98.5456% ( 3) 00:10:34.294 8.154 - 8.201: 98.5677% ( 3) 00:10:34.294 8.201 - 8.249: 98.5750% ( 1) 00:10:34.294 8.249 - 8.296: 98.5897% ( 2) 00:10:34.294 8.296 - 8.344: 98.6044% ( 2) 00:10:34.294 8.391 - 8.439: 98.6117% ( 1) 00:10:34.294 8.439 - 8.486: 98.6264% ( 2) 00:10:34.294 8.581 - 8.628: 98.6338% ( 1) 00:10:34.294 8.770 - 8.818: 98.6411% ( 1) 00:10:34.294 8.865 - 8.913: 98.6631% ( 3) 00:10:34.294 8.913 - 8.960: 98.6852% ( 3) 00:10:34.294 8.960 - 9.007: 98.6925% ( 1) 00:10:34.294 9.055 - 9.102: 98.7219% ( 4) 00:10:34.294 9.102 - 9.150: 98.7292% ( 1) 00:10:34.294 9.292 - 9.339: 98.7439% ( 2) 00:10:34.294 9.481 - 9.529: 98.7586% ( 2) 00:10:34.294 9.719 - 9.766: 98.7660% ( 1) 00:10:34.294 9.861 - 9.908: 98.7807% ( 2) 00:10:34.294 9.956 - 10.003: 98.7880% ( 1) 00:10:34.294 10.050 - 10.098: 98.8100% ( 3) 00:10:34.294 10.193 - 10.240: 98.8174% ( 1) 00:10:34.294 10.382 - 10.430: 98.8321% ( 2) 00:10:34.294 10.572 - 10.619: 98.8468% ( 2) 00:10:34.294 10.761 - 10.809: 98.8541% ( 1) 00:10:34.294 10.951 - 10.999: 98.8615% ( 1) 00:10:34.294 10.999 - 11.046: 98.8688% ( 1) 00:10:34.294 11.046 - 11.093: 98.8762% ( 1) 00:10:34.294 11.093 - 11.141: 98.8835% ( 1) 00:10:34.294 11.141 - 11.188: 98.8982% ( 2) 00:10:34.294 11.188 - 11.236: 98.9055% ( 1) 00:10:34.294 11.330 - 11.378: 98.9129% ( 1) 00:10:34.294 11.473 - 11.520: 98.9202% ( 1) 00:10:34.294 11.615 - 11.662: 98.9276% ( 1) 00:10:34.294 11.852 - 11.899: 98.9349% ( 1) 00:10:34.294 11.899 - 11.947: 98.9423% ( 1) 00:10:34.294 12.041 - 12.089: 98.9496% ( 1) 00:10:34.294 12.516 - 12.610: 98.9570% ( 1) 00:10:34.294 12.895 - 12.990: 98.9643% ( 1) 00:10:34.294 13.274 - 13.369: 98.9716% ( 1) 00:10:34.294 13.369 - 13.464: 98.9790% ( 1) 00:10:34.294 13.559 - 13.653: 98.9863% ( 1) 00:10:34.294 13.653 - 13.748: 98.9937% ( 1) 00:10:34.294 13.938 - 14.033: 99.0084% ( 2) 00:10:34.294 14.033 - 14.127: 99.0304% ( 3) 00:10:34.294 14.507 - 14.601: 99.0378% ( 1) 00:10:34.294 14.601 - 14.696: 99.0524% ( 2) 00:10:34.294 14.696 - 14.791: 99.0598% ( 1) 00:10:34.294 14.791 - 14.886: 99.0671% ( 1) 00:10:34.294 15.076 - 15.170: 99.0745% ( 1) 00:10:34.294 15.360 - 15.455: 99.0818% ( 1) 00:10:34.294 15.550 - 15.644: 99.0892% ( 1) 00:10:34.294 15.644 - 15.739: 99.0965% ( 1) 00:10:34.294 17.161 - 17.256: 99.1112% ( 2) 00:10:34.294 17.256 - 17.351: 99.1332% ( 3) 00:10:34.294 17.351 - 17.446: 99.1479% 
( 2) 00:10:34.294 17.446 - 17.541: 99.1920% ( 6) 00:10:34.294 17.541 - 17.636: 99.2508% ( 8) 00:10:34.294 17.636 - 17.730: 99.2728% ( 3) 00:10:34.294 17.730 - 17.825: 99.3169% ( 6) 00:10:34.294 17.825 - 17.920: 99.3463% ( 4) 00:10:34.294 17.920 - 18.015: 99.4124% ( 9) 00:10:34.294 18.015 - 18.110: 99.4638% ( 7) 00:10:34.294 18.110 - 18.204: 99.5079% ( 6) 00:10:34.294 18.204 - 18.299: 99.5666% ( 8) 00:10:34.294 18.299 - 18.394: 99.5960% ( 4) 00:10:34.294 18.394 - 18.489: 99.6548% ( 8) 00:10:34.294 18.489 - 18.584: 99.6988% ( 6) 00:10:34.294 18.584 - 18.679: 99.7649% ( 9) 00:10:34.294 18.679 - 18.773: 99.7796% ( 2) 00:10:34.294 18.773 - 18.868: 99.8017% ( 3) 00:10:34.294 18.868 - 18.963: 99.8237% ( 3) 00:10:34.294 18.963 - 19.058: 99.8531% ( 4) 00:10:34.294 19.058 - 19.153: 99.8678% ( 2) 00:10:34.294 19.247 - 19.342: 99.8751% ( 1) 00:10:34.294 19.437 - 19.532: 99.8972% ( 3) 00:10:34.294 21.144 - 21.239: 99.9045% ( 1) 00:10:34.294 21.428 - 21.523: 99.9119% ( 1) 00:10:34.294 23.609 - 23.704: 99.9192% ( 1) 00:10:34.294 24.462 - 24.652: 99.9265% ( 1) 00:10:34.294 25.790 - 25.979: 99.9339% ( 1) 00:10:34.294 26.548 - 26.738: 99.9412% ( 1) 00:10:34.294 28.444 - 28.634: 99.9486% ( 1) 00:10:34.294 3009.801 - 3021.938: 99.9559% ( 1) 00:10:34.294 3883.615 - 3907.887: 99.9633% ( 1) 00:10:34.294 3980.705 - 4004.978: 99.9927% ( 4) 00:10:34.294 4004.978 - 4029.250: 100.0000% ( 1) 00:10:34.294 00:10:34.294 Complete histogram 00:10:34.294 ================== 00:10:34.294 Range in us Cumulative Count 00:10:34.294 2.074 - 2.086: 8.4105% ( 1145) 00:10:34.294 2.086 - 2.098: 38.9379% ( 4156) 00:10:34.294 2.098 - 2.110: 41.8613% ( 398) 00:10:34.294 2.110 - 2.121: 51.7482% ( 1346) 00:10:34.294 2.121 - 2.133: 59.2111% ( 1016) 00:10:34.294 2.133 - 2.145: 60.5480% ( 182) 00:10:34.294 2.145 - 2.157: 69.9354% ( 1278) 00:10:34.294 2.157 - 2.169: 78.7792% ( 1204) 00:10:34.294 2.169 - 2.181: 79.9618% ( 161) 00:10:34.294 2.181 - 2.193: 84.5306% ( 622) 00:10:34.294 2.193 - 2.204: 87.5496% ( 411) 00:10:34.294 2.204 - 2.216: 88.2327% ( 93) 00:10:34.294 2.216 - 2.228: 89.9221% ( 230) 00:10:34.294 2.228 - 2.240: 92.3167% ( 326) 00:10:34.294 2.240 - 2.252: 93.7858% ( 200) 00:10:34.294 2.252 - 2.264: 94.6526% ( 118) 00:10:34.294 2.264 - 2.276: 95.1814% ( 72) 00:10:34.294 2.276 - 2.287: 95.4459% ( 36) 00:10:34.294 2.287 - 2.299: 95.6222% ( 24) 00:10:34.294 2.299 - 2.311: 95.9307% ( 42) 00:10:34.294 2.311 - 2.323: 96.2759% ( 47) 00:10:34.294 2.323 - 2.335: 96.3714% ( 13) 00:10:34.294 2.335 - 2.347: 96.4008% ( 4) 00:10:34.294 2.347 - 2.359: 96.4155% ( 2) 00:10:34.294 2.359 - 2.370: 96.4375% ( 3) 00:10:34.294 2.370 - 2.382: 96.5036% ( 9) 00:10:34.294 2.382 - 2.394: 96.7240% ( 30) 00:10:34.294 2.394 - 2.406: 97.0325% ( 42) 00:10:34.294 2.406 - 2.418: 97.2675% ( 32) 00:10:34.294 2.418 - 2.430: 97.4952% ( 31) 00:10:34.294 2.430 - 2.441: 97.7229% ( 31) 00:10:34.294 2.441 - 2.453: 97.8625% ( 19) 00:10:34.294 2.453 - 2.465: 98.0241% ( 22) 00:10:34.294 2.465 - 2.477: 98.0755% ( 7) 00:10:34.294 2.477 - 2.489: 98.2151% ( 19) 00:10:34.294 2.489 - 2.501: 98.2591% ( 6) 00:10:34.294 2.501 - 2.513: 98.3106% ( 7) 00:10:34.294 2.513 - 2.524: 98.3473% ( 5) 00:10:34.294 2.524 - 2.536: 98.3840% ( 5) 00:10:34.294 2.536 - 2.548: 98.4354% ( 7) 00:10:34.294 2.560 - 2.572: 98.4428% ( 1) 00:10:34.294 2.572 - 2.584: 98.4501% ( 1) 00:10:34.295 2.584 - 2.596: 98.4575% ( 1) 00:10:34.295 2.607 - 2.619: 98.4648% ( 1) 00:10:34.295 2.631 - 2.643: 98.4869% ( 3) 00:10:34.295 2.643 - 2.655: 98.5089% ( 3) 00:10:34.295 2.726 - 2.738: 98.5162% ( 1) 00:10:34.295 2.750 - 
2.761: 98.5236% ( 1) 00:10:34.295 2.963 - 2.975: 98.5309% ( 1) 00:10:34.295 3.366 - 3.390: 98.5383% ( 1) 00:10:34.295 3.413 - 3.437: 98.5456% ( 1) 00:10:34.295 3.437 - 3.461: 98.5603% ( 2) 00:10:34.295 3.461 - 3.484: 98.5677% ( 1) 00:10:34.295 3.484 - 3.508: 98.5823% ( 2) 00:10:34.295 3.508 - 3.532: 98.6044% ( 3) 00:10:34.295 3.532 - 3.556: 98.6264% ( 3) 00:10:34.295 3.556 - 3.579: 98.6338% ( 1) 00:10:34.295 3.579 - 3.603: 98.6485% ( 2) 00:10:34.295 3.603 - 3.627: 98.6558% ( 1) 00:10:34.295 3.627 - 3.650: 98.6631% ( 1) 00:10:34.295 [2024-07-15 23:37:09.357338] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:34.295 3.650 - 3.674: 98.6705% ( 1) 00:10:34.295 3.674 - 3.698: 98.6778% ( 1) 00:10:34.295 3.745 - 3.769: 98.6852% ( 1) 00:10:34.295 3.769 - 3.793: 98.7072% ( 3) 00:10:34.295 3.793 - 3.816: 98.7146% ( 1) 00:10:34.295 3.816 - 3.840: 98.7292% ( 2) 00:10:34.295 3.864 - 3.887: 98.7366% ( 1) 00:10:34.295 3.887 - 3.911: 98.7439% ( 1) 00:10:34.295 3.911 - 3.935: 98.7513% ( 1) 00:10:34.295 3.959 - 3.982: 98.7660% ( 2) 00:10:34.295 3.982 - 4.006: 98.7733% ( 1) 00:10:34.295 4.077 - 4.101: 98.7807% ( 1) 00:10:34.295 4.148 - 4.172: 98.7880% ( 1) 00:10:34.295 5.381 - 5.404: 98.7954% ( 1) 00:10:34.295 5.618 - 5.641: 98.8027% ( 1) 00:10:34.295 5.665 - 5.689: 98.8100% ( 1) 00:10:34.295 6.210 - 6.258: 98.8174% ( 1) 00:10:34.295 6.353 - 6.400: 98.8247% ( 1) 00:10:34.295 6.400 - 6.447: 98.8394% ( 2) 00:10:34.295 6.447 - 6.495: 98.8468% ( 1) 00:10:34.295 6.637 - 6.684: 98.8688% ( 3) 00:10:34.295 6.684 - 6.732: 98.8835% ( 2) 00:10:34.295 6.732 - 6.779: 98.8982% ( 2) 00:10:34.295 6.779 - 6.827: 98.9129% ( 2) 00:10:34.295 7.159 - 7.206: 98.9202% ( 1) 00:10:34.295 7.490 - 7.538: 98.9276% ( 1) 00:10:34.295 7.633 - 7.680: 98.9349% ( 1) 00:10:34.295 7.727 - 7.775: 98.9423% ( 1) 00:10:34.295 8.107 - 8.154: 98.9496% ( 1) 00:10:34.295 8.154 - 8.201: 98.9570% ( 1) 00:10:34.295 9.055 - 9.102: 98.9643% ( 1) 00:10:34.295 10.667 - 10.714: 98.9716% ( 1) 00:10:34.295 11.046 - 11.093: 98.9790% ( 1) 00:10:34.295 15.644 - 15.739: 98.9863% ( 1) 00:10:34.295 15.739 - 15.834: 98.9937% ( 1) 00:10:34.295 15.834 - 15.929: 99.0157% ( 3) 00:10:34.295 15.929 - 16.024: 99.0451% ( 4) 00:10:34.295 16.024 - 16.119: 99.0598% ( 2) 00:10:34.295 16.119 - 16.213: 99.0745% ( 2) 00:10:34.295 16.213 - 16.308: 99.0892% ( 2) 00:10:34.295 16.308 - 16.403: 99.1259% ( 5) 00:10:34.295 16.403 - 16.498: 99.1553% ( 4) 00:10:34.295 16.498 - 16.593: 99.1626% ( 1) 00:10:34.295 16.593 - 16.687: 99.2067% ( 6) 00:10:34.295 16.687 - 16.782: 99.2361% ( 4) 00:10:34.295 16.782 - 16.877: 99.3022% ( 9) 00:10:34.295 16.877 - 16.972: 99.3242% ( 3) 00:10:34.295 16.972 - 17.067: 99.3316% ( 1) 00:10:34.295 17.161 - 17.256: 99.3536% ( 3) 00:10:34.295 17.256 - 17.351: 99.3756% ( 3) 00:10:34.295 17.351 - 17.446: 99.3830% ( 1) 00:10:34.295 17.636 - 17.730: 99.3903% ( 1) 00:10:34.295 17.730 - 17.825: 99.4050% ( 2) 00:10:34.295 17.825 - 17.920: 99.4124% ( 1) 00:10:34.295 18.110 - 18.204: 99.4197% ( 1) 00:10:34.295 18.394 - 18.489: 99.4271% ( 1) 00:10:34.295 20.385 - 20.480: 99.4344% ( 1) 00:10:34.295 21.049 - 21.144: 99.4418% ( 1) 00:10:34.295 83.058 - 83.437: 99.4491% ( 1) 00:10:34.295 3021.938 - 3034.074: 99.4564% ( 1) 00:10:34.295 3543.799 - 3568.071: 99.4638% ( 1) 00:10:34.295 3980.705 - 4004.978: 99.9265% ( 63) 00:10:34.295 4004.978 - 4029.250: 99.9853% ( 8) 00:10:34.295 4975.881 - 5000.154: 99.9927% ( 1) 00:10:34.295 5000.154 - 5024.427: 100.0000% ( 1) 00:10:34.295 00:10:34.295 23:37:09 nvmf_tcp.nvmf_vfio_user
-- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:10:34.295 23:37:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:10:34.295 23:37:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:10:34.295 23:37:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:10:34.295 23:37:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:34.861 [ 00:10:34.861 { 00:10:34.861 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:34.861 "subtype": "Discovery", 00:10:34.861 "listen_addresses": [], 00:10:34.861 "allow_any_host": true, 00:10:34.861 "hosts": [] 00:10:34.861 }, 00:10:34.861 { 00:10:34.861 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:34.861 "subtype": "NVMe", 00:10:34.861 "listen_addresses": [ 00:10:34.861 { 00:10:34.861 "trtype": "VFIOUSER", 00:10:34.861 "adrfam": "IPv4", 00:10:34.861 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:34.861 "trsvcid": "0" 00:10:34.861 } 00:10:34.861 ], 00:10:34.861 "allow_any_host": true, 00:10:34.861 "hosts": [], 00:10:34.861 "serial_number": "SPDK1", 00:10:34.861 "model_number": "SPDK bdev Controller", 00:10:34.861 "max_namespaces": 32, 00:10:34.861 "min_cntlid": 1, 00:10:34.861 "max_cntlid": 65519, 00:10:34.861 "namespaces": [ 00:10:34.861 { 00:10:34.861 "nsid": 1, 00:10:34.861 "bdev_name": "Malloc1", 00:10:34.861 "name": "Malloc1", 00:10:34.861 "nguid": "96507A39241148079E8441D61BD7C934", 00:10:34.861 "uuid": "96507a39-2411-4807-9e84-41d61bd7c934" 00:10:34.861 } 00:10:34.861 ] 00:10:34.861 }, 00:10:34.861 { 00:10:34.861 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:34.861 "subtype": "NVMe", 00:10:34.861 "listen_addresses": [ 00:10:34.861 { 00:10:34.861 "trtype": "VFIOUSER", 00:10:34.861 "adrfam": "IPv4", 00:10:34.861 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:34.861 "trsvcid": "0" 00:10:34.861 } 00:10:34.861 ], 00:10:34.861 "allow_any_host": true, 00:10:34.861 "hosts": [], 00:10:34.861 "serial_number": "SPDK2", 00:10:34.861 "model_number": "SPDK bdev Controller", 00:10:34.861 "max_namespaces": 32, 00:10:34.861 "min_cntlid": 1, 00:10:34.861 "max_cntlid": 65519, 00:10:34.861 "namespaces": [ 00:10:34.861 { 00:10:34.861 "nsid": 1, 00:10:34.861 "bdev_name": "Malloc2", 00:10:34.861 "name": "Malloc2", 00:10:34.861 "nguid": "5CBD549A0BEB4CC4A50FECDF195ADA09", 00:10:34.861 "uuid": "5cbd549a-0beb-4cc4-a50f-ecdf195ada09" 00:10:34.861 } 00:10:34.861 ] 00:10:34.861 } 00:10:34.861 ] 00:10:34.861 23:37:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:34.861 23:37:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3730573 00:10:34.861 23:37:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:10:34.861 23:37:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:34.861 23:37:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:10:34.861 23:37:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:10:34.861 23:37:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:34.861 23:37:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:10:34.861 23:37:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:34.861 23:37:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:10:34.861 EAL: No free 2048 kB hugepages reported on node 1 00:10:34.861 [2024-07-15 23:37:09.861425] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:34.861 Malloc3 00:10:35.119 23:37:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:10:35.119 [2024-07-15 23:37:10.214997] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:35.119 23:37:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:35.377 Asynchronous Event Request test 00:10:35.377 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:35.377 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:35.377 Registering asynchronous event callbacks... 00:10:35.377 Starting namespace attribute notice tests for all controllers... 00:10:35.377 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:35.377 aer_cb - Changed Namespace 00:10:35.377 Cleaning up... 
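For reference, the namespace hot-add / AER handshake traced above can be reproduced outside the autotest harness with a sketch along the following lines. The paths, RPC names, and flags are the ones visible in this log; the polling loop is a stand-in for the harness's waitforfile helper and is an assumption. The nvmf_get_subsystems dump that follows shows the end state, with Malloc3 exposed as nsid 2 alongside Malloc1.

#!/usr/bin/env bash
# Minimal sketch of the AER sequence above, assuming the SPDK target from
# this run is still serving nqn.2019-07.io.spdk:cnode1 over vfio-user.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
AER_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer
TOUCH_FILE=/tmp/aer_touch_file
rm -f "$TOUCH_FILE"
# Start the AER listener in the background with the flags captured above;
# -t points at the touch file the harness polls for before proceeding.
"$AER_BIN" -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t "$TOUCH_FILE" &
aerpid=$!
until [ -e "$TOUCH_FILE" ]; do sleep 0.1; done  # waitforfile stand-in
# Hot-add a second namespace; the target raises a Namespace Attribute
# Notice (log page 4), which aer_cb reports before the tool cleans up.
"$RPC" bdev_malloc_create 64 512 --name Malloc3
"$RPC" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
wait "$aerpid"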
00:10:35.377 [ 00:10:35.377 { 00:10:35.377 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:35.377 "subtype": "Discovery", 00:10:35.377 "listen_addresses": [], 00:10:35.377 "allow_any_host": true, 00:10:35.377 "hosts": [] 00:10:35.377 }, 00:10:35.377 { 00:10:35.377 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:35.377 "subtype": "NVMe", 00:10:35.377 "listen_addresses": [ 00:10:35.377 { 00:10:35.377 "trtype": "VFIOUSER", 00:10:35.377 "adrfam": "IPv4", 00:10:35.377 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:35.377 "trsvcid": "0" 00:10:35.377 } 00:10:35.377 ], 00:10:35.377 "allow_any_host": true, 00:10:35.377 "hosts": [], 00:10:35.377 "serial_number": "SPDK1", 00:10:35.377 "model_number": "SPDK bdev Controller", 00:10:35.377 "max_namespaces": 32, 00:10:35.377 "min_cntlid": 1, 00:10:35.377 "max_cntlid": 65519, 00:10:35.377 "namespaces": [ 00:10:35.377 { 00:10:35.377 "nsid": 1, 00:10:35.377 "bdev_name": "Malloc1", 00:10:35.377 "name": "Malloc1", 00:10:35.377 "nguid": "96507A39241148079E8441D61BD7C934", 00:10:35.377 "uuid": "96507a39-2411-4807-9e84-41d61bd7c934" 00:10:35.377 }, 00:10:35.377 { 00:10:35.377 "nsid": 2, 00:10:35.377 "bdev_name": "Malloc3", 00:10:35.377 "name": "Malloc3", 00:10:35.377 "nguid": "D812FCDAAF924FF28FA15D0767F3C7A5", 00:10:35.377 "uuid": "d812fcda-af92-4ff2-8fa1-5d0767f3c7a5" 00:10:35.377 } 00:10:35.377 ] 00:10:35.377 }, 00:10:35.377 { 00:10:35.377 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:35.377 "subtype": "NVMe", 00:10:35.377 "listen_addresses": [ 00:10:35.377 { 00:10:35.377 "trtype": "VFIOUSER", 00:10:35.377 "adrfam": "IPv4", 00:10:35.377 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:35.377 "trsvcid": "0" 00:10:35.377 } 00:10:35.377 ], 00:10:35.377 "allow_any_host": true, 00:10:35.377 "hosts": [], 00:10:35.377 "serial_number": "SPDK2", 00:10:35.377 "model_number": "SPDK bdev Controller", 00:10:35.377 "max_namespaces": 32, 00:10:35.377 "min_cntlid": 1, 00:10:35.377 "max_cntlid": 65519, 00:10:35.377 "namespaces": [ 00:10:35.377 { 00:10:35.377 "nsid": 1, 00:10:35.377 "bdev_name": "Malloc2", 00:10:35.377 "name": "Malloc2", 00:10:35.377 "nguid": "5CBD549A0BEB4CC4A50FECDF195ADA09", 00:10:35.377 "uuid": "5cbd549a-0beb-4cc4-a50f-ecdf195ada09" 00:10:35.377 } 00:10:35.377 ] 00:10:35.377 } 00:10:35.377 ] 00:10:35.377 23:37:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3730573 00:10:35.377 23:37:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:35.377 23:37:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:35.377 23:37:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:10:35.377 23:37:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:10:35.638 [2024-07-15 23:37:10.509372] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
00:10:35.638 [2024-07-15 23:37:10.509415] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3730710 ] 00:10:35.638 EAL: No free 2048 kB hugepages reported on node 1 00:10:35.638 [2024-07-15 23:37:10.542081] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:10:35.638 [2024-07-15 23:37:10.547406] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:35.638 [2024-07-15 23:37:10.547436] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f904cb76000 00:10:35.638 [2024-07-15 23:37:10.548411] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:35.638 [2024-07-15 23:37:10.549414] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:35.638 [2024-07-15 23:37:10.550424] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:35.638 [2024-07-15 23:37:10.551433] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:35.638 [2024-07-15 23:37:10.552441] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:35.638 [2024-07-15 23:37:10.553443] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:35.638 [2024-07-15 23:37:10.554448] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:35.638 [2024-07-15 23:37:10.555457] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:35.638 [2024-07-15 23:37:10.556471] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:35.638 [2024-07-15 23:37:10.556499] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f904cb6b000 00:10:35.638 [2024-07-15 23:37:10.557638] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:35.638 [2024-07-15 23:37:10.572443] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:10:35.638 [2024-07-15 23:37:10.572480] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:10:35.638 [2024-07-15 23:37:10.574577] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:10:35.638 [2024-07-15 23:37:10.574626] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:10:35.638 [2024-07-15 23:37:10.574708] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to 
wait for connect adminq (no timeout) 00:10:35.638 [2024-07-15 23:37:10.574732] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:10:35.638 [2024-07-15 23:37:10.574743] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:10:35.638 [2024-07-15 23:37:10.575585] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:10:35.638 [2024-07-15 23:37:10.575605] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:10:35.638 [2024-07-15 23:37:10.575618] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:10:35.638 [2024-07-15 23:37:10.576593] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:10:35.638 [2024-07-15 23:37:10.576613] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:10:35.639 [2024-07-15 23:37:10.576627] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:10:35.639 [2024-07-15 23:37:10.577600] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:10:35.639 [2024-07-15 23:37:10.577619] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:10:35.639 [2024-07-15 23:37:10.578610] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:10:35.639 [2024-07-15 23:37:10.578630] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:10:35.639 [2024-07-15 23:37:10.578639] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:10:35.639 [2024-07-15 23:37:10.578650] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:10:35.639 [2024-07-15 23:37:10.578759] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:10:35.639 [2024-07-15 23:37:10.578767] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:10:35.639 [2024-07-15 23:37:10.578775] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:10:35.639 [2024-07-15 23:37:10.579617] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:10:35.639 [2024-07-15 23:37:10.580625] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:10:35.639 [2024-07-15 23:37:10.581633] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:10:35.639 [2024-07-15 23:37:10.582627] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:35.639 [2024-07-15 23:37:10.582707] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:10:35.639 [2024-07-15 23:37:10.586966] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:10:35.639 [2024-07-15 23:37:10.586987] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:10:35.639 [2024-07-15 23:37:10.586997] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:10:35.639 [2024-07-15 23:37:10.587021] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:10:35.639 [2024-07-15 23:37:10.587039] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:10:35.639 [2024-07-15 23:37:10.587060] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:35.639 [2024-07-15 23:37:10.587070] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:35.639 [2024-07-15 23:37:10.587088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:35.639 [2024-07-15 23:37:10.594971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:10:35.639 [2024-07-15 23:37:10.594994] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:10:35.639 [2024-07-15 23:37:10.595007] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:10:35.639 [2024-07-15 23:37:10.595016] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:10:35.639 [2024-07-15 23:37:10.595024] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:10:35.639 [2024-07-15 23:37:10.595032] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:10:35.639 [2024-07-15 23:37:10.595040] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:10:35.639 [2024-07-15 23:37:10.595048] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:10:35.639 [2024-07-15 23:37:10.595060] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:10:35.639 [2024-07-15 23:37:10.595076] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 
0x0 00:10:35.639 [2024-07-15 23:37:10.602967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:10:35.639 [2024-07-15 23:37:10.602995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:35.639 [2024-07-15 23:37:10.603010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:35.639 [2024-07-15 23:37:10.603026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:35.639 [2024-07-15 23:37:10.603039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:35.639 [2024-07-15 23:37:10.603048] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:10:35.639 [2024-07-15 23:37:10.603063] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:10:35.639 [2024-07-15 23:37:10.603078] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:10:35.639 [2024-07-15 23:37:10.610967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:10:35.639 [2024-07-15 23:37:10.610985] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:10:35.639 [2024-07-15 23:37:10.610994] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:10:35.639 [2024-07-15 23:37:10.611006] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:10:35.639 [2024-07-15 23:37:10.611016] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:10:35.639 [2024-07-15 23:37:10.611030] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:35.639 [2024-07-15 23:37:10.615981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:10:35.639 [2024-07-15 23:37:10.616073] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:10:35.639 [2024-07-15 23:37:10.616092] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:10:35.639 [2024-07-15 23:37:10.616106] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:10:35.639 [2024-07-15 23:37:10.616115] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:10:35.639 [2024-07-15 23:37:10.616125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 
0x2000002f9000 PRP2 0x0 00:10:35.639 [2024-07-15 23:37:10.626968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:10:35.639 [2024-07-15 23:37:10.626990] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:10:35.639 [2024-07-15 23:37:10.627006] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:10:35.639 [2024-07-15 23:37:10.627021] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:10:35.639 [2024-07-15 23:37:10.627034] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:35.639 [2024-07-15 23:37:10.627042] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:35.639 [2024-07-15 23:37:10.627052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:35.639 [2024-07-15 23:37:10.634965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:10:35.639 [2024-07-15 23:37:10.635007] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:10:35.639 [2024-07-15 23:37:10.635027] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:10:35.639 [2024-07-15 23:37:10.635041] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:35.639 [2024-07-15 23:37:10.635049] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:35.639 [2024-07-15 23:37:10.635059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:35.639 [2024-07-15 23:37:10.642967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:10:35.639 [2024-07-15 23:37:10.642988] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:10:35.639 [2024-07-15 23:37:10.643000] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:10:35.639 [2024-07-15 23:37:10.643014] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:10:35.639 [2024-07-15 23:37:10.643025] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:10:35.639 [2024-07-15 23:37:10.643033] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:10:35.639 [2024-07-15 23:37:10.643041] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:10:35.639 
[2024-07-15 23:37:10.643049] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:10:35.639 [2024-07-15 23:37:10.643056] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:10:35.639 [2024-07-15 23:37:10.643065] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:10:35.639 [2024-07-15 23:37:10.643089] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:10:35.639 [2024-07-15 23:37:10.650967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:10:35.639 [2024-07-15 23:37:10.650994] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:10:35.639 [2024-07-15 23:37:10.658967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:10:35.639 [2024-07-15 23:37:10.658992] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:10:35.639 [2024-07-15 23:37:10.666968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:10:35.639 [2024-07-15 23:37:10.666993] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:35.639 [2024-07-15 23:37:10.674966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:10:35.640 [2024-07-15 23:37:10.674998] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:10:35.640 [2024-07-15 23:37:10.675009] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:10:35.640 [2024-07-15 23:37:10.675016] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:10:35.640 [2024-07-15 23:37:10.675025] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:10:35.640 [2024-07-15 23:37:10.675035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:10:35.640 [2024-07-15 23:37:10.675048] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:10:35.640 [2024-07-15 23:37:10.675056] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:10:35.640 [2024-07-15 23:37:10.675065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:10:35.640 [2024-07-15 23:37:10.675076] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:10:35.640 [2024-07-15 23:37:10.675084] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:35.640 [2024-07-15 23:37:10.675093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 
0x0 00:10:35.640 [2024-07-15 23:37:10.675105] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:10:35.640 [2024-07-15 23:37:10.675113] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:10:35.640 [2024-07-15 23:37:10.675122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:10:35.640 [2024-07-15 23:37:10.682965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:10:35.640 [2024-07-15 23:37:10.682993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:10:35.640 [2024-07-15 23:37:10.683010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:10:35.640 [2024-07-15 23:37:10.683023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:10:35.640 ===================================================== 00:10:35.640 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:35.640 ===================================================== 00:10:35.640 Controller Capabilities/Features 00:10:35.640 ================================ 00:10:35.640 Vendor ID: 4e58 00:10:35.640 Subsystem Vendor ID: 4e58 00:10:35.640 Serial Number: SPDK2 00:10:35.640 Model Number: SPDK bdev Controller 00:10:35.640 Firmware Version: 24.09 00:10:35.640 Recommended Arb Burst: 6 00:10:35.640 IEEE OUI Identifier: 8d 6b 50 00:10:35.640 Multi-path I/O 00:10:35.640 May have multiple subsystem ports: Yes 00:10:35.640 May have multiple controllers: Yes 00:10:35.640 Associated with SR-IOV VF: No 00:10:35.640 Max Data Transfer Size: 131072 00:10:35.640 Max Number of Namespaces: 32 00:10:35.640 Max Number of I/O Queues: 127 00:10:35.640 NVMe Specification Version (VS): 1.3 00:10:35.640 NVMe Specification Version (Identify): 1.3 00:10:35.640 Maximum Queue Entries: 256 00:10:35.640 Contiguous Queues Required: Yes 00:10:35.640 Arbitration Mechanisms Supported 00:10:35.640 Weighted Round Robin: Not Supported 00:10:35.640 Vendor Specific: Not Supported 00:10:35.640 Reset Timeout: 15000 ms 00:10:35.640 Doorbell Stride: 4 bytes 00:10:35.640 NVM Subsystem Reset: Not Supported 00:10:35.640 Command Sets Supported 00:10:35.640 NVM Command Set: Supported 00:10:35.640 Boot Partition: Not Supported 00:10:35.640 Memory Page Size Minimum: 4096 bytes 00:10:35.640 Memory Page Size Maximum: 4096 bytes 00:10:35.640 Persistent Memory Region: Not Supported 00:10:35.640 Optional Asynchronous Events Supported 00:10:35.640 Namespace Attribute Notices: Supported 00:10:35.640 Firmware Activation Notices: Not Supported 00:10:35.640 ANA Change Notices: Not Supported 00:10:35.640 PLE Aggregate Log Change Notices: Not Supported 00:10:35.640 LBA Status Info Alert Notices: Not Supported 00:10:35.640 EGE Aggregate Log Change Notices: Not Supported 00:10:35.640 Normal NVM Subsystem Shutdown event: Not Supported 00:10:35.640 Zone Descriptor Change Notices: Not Supported 00:10:35.640 Discovery Log Change Notices: Not Supported 00:10:35.640 Controller Attributes 00:10:35.640 128-bit Host Identifier: Supported 00:10:35.640 Non-Operational Permissive Mode: Not Supported 00:10:35.640 NVM Sets: Not Supported 00:10:35.640 Read Recovery Levels: Not Supported 
00:10:35.640 Endurance Groups: Not Supported 00:10:35.640 Predictable Latency Mode: Not Supported 00:10:35.640 Traffic Based Keep ALive: Not Supported 00:10:35.640 Namespace Granularity: Not Supported 00:10:35.640 SQ Associations: Not Supported 00:10:35.640 UUID List: Not Supported 00:10:35.640 Multi-Domain Subsystem: Not Supported 00:10:35.640 Fixed Capacity Management: Not Supported 00:10:35.640 Variable Capacity Management: Not Supported 00:10:35.640 Delete Endurance Group: Not Supported 00:10:35.640 Delete NVM Set: Not Supported 00:10:35.640 Extended LBA Formats Supported: Not Supported 00:10:35.640 Flexible Data Placement Supported: Not Supported 00:10:35.640 00:10:35.640 Controller Memory Buffer Support 00:10:35.640 ================================ 00:10:35.640 Supported: No 00:10:35.640 00:10:35.640 Persistent Memory Region Support 00:10:35.640 ================================ 00:10:35.640 Supported: No 00:10:35.640 00:10:35.640 Admin Command Set Attributes 00:10:35.640 ============================ 00:10:35.640 Security Send/Receive: Not Supported 00:10:35.640 Format NVM: Not Supported 00:10:35.640 Firmware Activate/Download: Not Supported 00:10:35.640 Namespace Management: Not Supported 00:10:35.640 Device Self-Test: Not Supported 00:10:35.640 Directives: Not Supported 00:10:35.640 NVMe-MI: Not Supported 00:10:35.640 Virtualization Management: Not Supported 00:10:35.640 Doorbell Buffer Config: Not Supported 00:10:35.640 Get LBA Status Capability: Not Supported 00:10:35.640 Command & Feature Lockdown Capability: Not Supported 00:10:35.640 Abort Command Limit: 4 00:10:35.640 Async Event Request Limit: 4 00:10:35.640 Number of Firmware Slots: N/A 00:10:35.640 Firmware Slot 1 Read-Only: N/A 00:10:35.640 Firmware Activation Without Reset: N/A 00:10:35.640 Multiple Update Detection Support: N/A 00:10:35.640 Firmware Update Granularity: No Information Provided 00:10:35.640 Per-Namespace SMART Log: No 00:10:35.640 Asymmetric Namespace Access Log Page: Not Supported 00:10:35.640 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:10:35.640 Command Effects Log Page: Supported 00:10:35.640 Get Log Page Extended Data: Supported 00:10:35.640 Telemetry Log Pages: Not Supported 00:10:35.640 Persistent Event Log Pages: Not Supported 00:10:35.640 Supported Log Pages Log Page: May Support 00:10:35.640 Commands Supported & Effects Log Page: Not Supported 00:10:35.640 Feature Identifiers & Effects Log Page:May Support 00:10:35.640 NVMe-MI Commands & Effects Log Page: May Support 00:10:35.640 Data Area 4 for Telemetry Log: Not Supported 00:10:35.640 Error Log Page Entries Supported: 128 00:10:35.640 Keep Alive: Supported 00:10:35.640 Keep Alive Granularity: 10000 ms 00:10:35.640 00:10:35.640 NVM Command Set Attributes 00:10:35.640 ========================== 00:10:35.640 Submission Queue Entry Size 00:10:35.640 Max: 64 00:10:35.640 Min: 64 00:10:35.640 Completion Queue Entry Size 00:10:35.640 Max: 16 00:10:35.640 Min: 16 00:10:35.640 Number of Namespaces: 32 00:10:35.640 Compare Command: Supported 00:10:35.640 Write Uncorrectable Command: Not Supported 00:10:35.640 Dataset Management Command: Supported 00:10:35.640 Write Zeroes Command: Supported 00:10:35.640 Set Features Save Field: Not Supported 00:10:35.640 Reservations: Not Supported 00:10:35.640 Timestamp: Not Supported 00:10:35.640 Copy: Supported 00:10:35.640 Volatile Write Cache: Present 00:10:35.640 Atomic Write Unit (Normal): 1 00:10:35.640 Atomic Write Unit (PFail): 1 00:10:35.640 Atomic Compare & Write Unit: 1 00:10:35.640 Fused Compare & Write: 
Supported 00:10:35.640 Scatter-Gather List 00:10:35.640 SGL Command Set: Supported (Dword aligned) 00:10:35.640 SGL Keyed: Not Supported 00:10:35.640 SGL Bit Bucket Descriptor: Not Supported 00:10:35.640 SGL Metadata Pointer: Not Supported 00:10:35.640 Oversized SGL: Not Supported 00:10:35.640 SGL Metadata Address: Not Supported 00:10:35.640 SGL Offset: Not Supported 00:10:35.640 Transport SGL Data Block: Not Supported 00:10:35.640 Replay Protected Memory Block: Not Supported 00:10:35.640 00:10:35.640 Firmware Slot Information 00:10:35.640 ========================= 00:10:35.640 Active slot: 1 00:10:35.640 Slot 1 Firmware Revision: 24.09 00:10:35.640 00:10:35.640 00:10:35.640 Commands Supported and Effects 00:10:35.640 ============================== 00:10:35.640 Admin Commands 00:10:35.640 -------------- 00:10:35.640 Get Log Page (02h): Supported 00:10:35.640 Identify (06h): Supported 00:10:35.640 Abort (08h): Supported 00:10:35.640 Set Features (09h): Supported 00:10:35.640 Get Features (0Ah): Supported 00:10:35.640 Asynchronous Event Request (0Ch): Supported 00:10:35.640 Keep Alive (18h): Supported 00:10:35.640 I/O Commands 00:10:35.640 ------------ 00:10:35.640 Flush (00h): Supported LBA-Change 00:10:35.640 Write (01h): Supported LBA-Change 00:10:35.640 Read (02h): Supported 00:10:35.640 Compare (05h): Supported 00:10:35.640 Write Zeroes (08h): Supported LBA-Change 00:10:35.640 Dataset Management (09h): Supported LBA-Change 00:10:35.640 Copy (19h): Supported LBA-Change 00:10:35.640 00:10:35.640 Error Log 00:10:35.641 ========= 00:10:35.641 00:10:35.641 Arbitration 00:10:35.641 =========== 00:10:35.641 Arbitration Burst: 1 00:10:35.641 00:10:35.641 Power Management 00:10:35.641 ================ 00:10:35.641 Number of Power States: 1 00:10:35.641 Current Power State: Power State #0 00:10:35.641 Power State #0: 00:10:35.641 Max Power: 0.00 W 00:10:35.641 Non-Operational State: Operational 00:10:35.641 Entry Latency: Not Reported 00:10:35.641 Exit Latency: Not Reported 00:10:35.641 Relative Read Throughput: 0 00:10:35.641 Relative Read Latency: 0 00:10:35.641 Relative Write Throughput: 0 00:10:35.641 Relative Write Latency: 0 00:10:35.641 Idle Power: Not Reported 00:10:35.641 Active Power: Not Reported 00:10:35.641 Non-Operational Permissive Mode: Not Supported 00:10:35.641 00:10:35.641 Health Information 00:10:35.641 ================== 00:10:35.641 Critical Warnings: 00:10:35.641 Available Spare Space: OK 00:10:35.641 Temperature: OK 00:10:35.641 Device Reliability: OK 00:10:35.641 Read Only: No 00:10:35.641 Volatile Memory Backup: OK 00:10:35.641 Current Temperature: 0 Kelvin (-273 Celsius) 00:10:35.641 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:10:35.641 Available Spare: 0% 00:10:35.641 Available Spare Threshold: 0% 00:10:35.641 [2024-07-15 23:37:10.683138] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:10:35.641 [2024-07-15 23:37:10.690966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:10:35.641 [2024-07-15 23:37:10.691017] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:10:35.641 [2024-07-15 23:37:10.691035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:35.641 [2024-07-15 23:37:10.691046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:35.641 [2024-07-15 23:37:10.691056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:35.641 [2024-07-15 23:37:10.691066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:35.641 [2024-07-15 23:37:10.691147] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:10:35.641 [2024-07-15 23:37:10.691169] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:10:35.641 [2024-07-15 23:37:10.692152] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:35.641 [2024-07-15 23:37:10.692222] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:10:35.641 [2024-07-15 23:37:10.692238] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:10:35.641 [2024-07-15 23:37:10.693164] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:10:35.641 [2024-07-15 23:37:10.693189] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:10:35.641 [2024-07-15 23:37:10.693242] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:10:35.641 [2024-07-15 23:37:10.694488] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:35.641 Life Percentage Used: 0% 00:10:35.641 Data Units Read: 0 00:10:35.641 Data Units Written: 0 00:10:35.641 Host Read Commands: 0 00:10:35.641 Host Write Commands: 0 00:10:35.641 Controller Busy Time: 0 minutes 00:10:35.641 Power Cycles: 0 00:10:35.641 Power On Hours: 0 hours 00:10:35.641 Unsafe Shutdowns: 0 00:10:35.641 Unrecoverable Media Errors: 0 00:10:35.641 Lifetime Error Log Entries: 0 00:10:35.641 Warning Temperature Time: 0 minutes 00:10:35.641 Critical Temperature Time: 0 minutes 00:10:35.641 00:10:35.641 Number of Queues 00:10:35.641 ================ 00:10:35.641 Number of I/O Submission Queues: 127 00:10:35.641 Number of I/O Completion Queues: 127 00:10:35.641 00:10:35.641 Active Namespaces 00:10:35.641 ================= 00:10:35.641 Namespace ID:1 00:10:35.641 Error Recovery Timeout: Unlimited 00:10:35.641 Command Set Identifier: NVM (00h) 00:10:35.641 Deallocate: Supported 00:10:35.641 Deallocated/Unwritten Error: Not Supported 00:10:35.641 Deallocated Read Value: Unknown 00:10:35.641 Deallocate in Write Zeroes: Not Supported 00:10:35.641 Deallocated Guard Field: 0xFFFF 00:10:35.641 Flush: Supported 00:10:35.641 Reservation: Supported 00:10:35.641 Namespace Sharing Capabilities: Multiple Controllers 00:10:35.641 Size (in LBAs): 131072 (0GiB) 00:10:35.641 Capacity (in LBAs): 131072 (0GiB) 00:10:35.641 Utilization (in LBAs): 131072 (0GiB) 00:10:35.641 NGUID: 5CBD549A0BEB4CC4A50FECDF195ADA09 00:10:35.641 UUID: 5cbd549a-0beb-4cc4-a50f-ecdf195ada09 00:10:35.641 Thin Provisioning: Not Supported 00:10:35.641 Per-NS Atomic Units: Yes 00:10:35.641 Atomic Boundary Size (Normal): 0 00:10:35.641 Atomic Boundary Size
(PFail): 0 00:10:35.641 Atomic Boundary Offset: 0 00:10:35.641 Maximum Single Source Range Length: 65535 00:10:35.641 Maximum Copy Length: 65535 00:10:35.641 Maximum Source Range Count: 1 00:10:35.641 NGUID/EUI64 Never Reused: No 00:10:35.641 Namespace Write Protected: No 00:10:35.641 Number of LBA Formats: 1 00:10:35.641 Current LBA Format: LBA Format #00 00:10:35.641 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:35.641 00:10:35.641 23:37:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:10:35.899 EAL: No free 2048 kB hugepages reported on node 1 00:10:35.899 [2024-07-15 23:37:10.922966] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:41.166 Initializing NVMe Controllers 00:10:41.166 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:41.166 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:41.166 Initialization complete. Launching workers. 00:10:41.166 ======================================================== 00:10:41.166 Latency(us) 00:10:41.166 Device Information : IOPS MiB/s Average min max 00:10:41.166 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35239.14 137.65 3631.86 1155.57 7291.65 00:10:41.166 ======================================================== 00:10:41.166 Total : 35239.14 137.65 3631.86 1155.57 7291.65 00:10:41.166 00:10:41.166 [2024-07-15 23:37:16.028318] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:41.166 23:37:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:10:41.166 EAL: No free 2048 kB hugepages reported on node 1 00:10:41.166 [2024-07-15 23:37:16.269948] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:46.429 Initializing NVMe Controllers 00:10:46.429 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:46.429 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:46.429 Initialization complete. Launching workers. 
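The two spdk_nvme_perf invocations above exercise the same vfio-user controller twice, once with -w read and once with -w write; each latency summary is printed by the tool itself, and the write-run summary follows next. As a minimal sketch of the invocation pattern (binary path and transport string copied from this run; the flag comments are an interpretation, not tool documentation):

  PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
  $PERF -r "$TRID" -s 256 -g -q 128 -o 4096 -w read  -t 5 -c 0x2   # -q queue depth, -o I/O size in bytes, -t seconds, -c core mask
  $PERF -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2   # identical run with a write workload
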
00:10:46.429 ======================================================== 00:10:46.429 Latency(us) 00:10:46.429 Device Information : IOPS MiB/s Average min max 00:10:46.429 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32719.52 127.81 3911.45 1215.92 9834.59 00:10:46.429 ======================================================== 00:10:46.429 Total : 32719.52 127.81 3911.45 1215.92 9834.59 00:10:46.429 00:10:46.429 [2024-07-15 23:37:21.293974] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:46.429 23:37:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:46.429 EAL: No free 2048 kB hugepages reported on node 1 00:10:46.429 [2024-07-15 23:37:21.511786] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:51.695 [2024-07-15 23:37:26.650105] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:51.695 Initializing NVMe Controllers 00:10:51.695 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:51.695 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:51.695 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:10:51.695 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:10:51.695 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:10:51.695 Initialization complete. Launching workers. 00:10:51.695 Starting thread on core 2 00:10:51.695 Starting thread on core 3 00:10:51.695 Starting thread on core 1 00:10:51.696 23:37:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:10:51.696 EAL: No free 2048 kB hugepages reported on node 1 00:10:51.953 [2024-07-15 23:37:26.959449] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:55.231 [2024-07-15 23:37:30.045054] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:55.231 Initializing NVMe Controllers 00:10:55.231 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:55.231 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:55.231 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:10:55.231 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:10:55.231 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:10:55.231 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:10:55.231 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:55.231 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:55.231 Initialization complete. Launching workers. 
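The arbitration example launched above starts one worker per core in the 0xf mask against the same vfio-user controller; its per-core throughput summary follows. The tool itself prints the expanded configuration in the xtrace above; a condensed sketch of the invocation as run here:

  ARB=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration
  $ARB -t 3 -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
  # effective config (as echoed by the tool): -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1
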
00:10:55.231 Starting thread on core 1 with urgent priority queue 00:10:55.231 Starting thread on core 2 with urgent priority queue 00:10:55.231 Starting thread on core 3 with urgent priority queue 00:10:55.231 Starting thread on core 0 with urgent priority queue 00:10:55.231 SPDK bdev Controller (SPDK2 ) core 0: 4490.00 IO/s 22.27 secs/100000 ios 00:10:55.231 SPDK bdev Controller (SPDK2 ) core 1: 4948.67 IO/s 20.21 secs/100000 ios 00:10:55.231 SPDK bdev Controller (SPDK2 ) core 2: 5016.67 IO/s 19.93 secs/100000 ios 00:10:55.231 SPDK bdev Controller (SPDK2 ) core 3: 5136.67 IO/s 19.47 secs/100000 ios 00:10:55.231 ======================================================== 00:10:55.231 00:10:55.231 23:37:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:55.231 EAL: No free 2048 kB hugepages reported on node 1 00:10:55.231 [2024-07-15 23:37:30.352415] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:55.488 Initializing NVMe Controllers 00:10:55.488 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:55.488 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:55.488 Namespace ID: 1 size: 0GB 00:10:55.488 Initialization complete. 00:10:55.488 INFO: using host memory buffer for IO 00:10:55.488 Hello world! 00:10:55.488 [2024-07-15 23:37:30.365493] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:55.488 23:37:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:55.488 EAL: No free 2048 kB hugepages reported on node 1 00:10:55.745 [2024-07-15 23:37:30.653257] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:56.678 Initializing NVMe Controllers 00:10:56.678 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:56.678 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:56.678 Initialization complete. Launching workers. 
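The overhead test launched above measures per-I/O submit and completion path costs in nanoseconds; the -H flag appears to be what produces the bucketed submit/complete histograms printed below. A sketch of the invocation as run here (paths and flags copied from this run):

  OVERHEAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead
  $OVERHEAD -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
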
00:10:56.678 submit (in ns) avg, min, max = 5771.6, 3536.7, 4017426.7 00:10:56.678 complete (in ns) avg, min, max = 28893.0, 2067.8, 4016937.8 00:10:56.678 00:10:56.678 Submit histogram 00:10:56.678 ================ 00:10:56.678 Range in us Cumulative Count 00:10:56.678 3.532 - 3.556: 0.2729% ( 36) 00:10:56.678 3.556 - 3.579: 1.4024% ( 149) 00:10:56.678 3.579 - 3.603: 4.3435% ( 388) 00:10:56.678 3.603 - 3.627: 8.5658% ( 557) 00:10:56.678 3.627 - 3.650: 15.6989% ( 941) 00:10:56.678 3.650 - 3.674: 22.6804% ( 921) 00:10:56.678 3.674 - 3.698: 30.5412% ( 1037) 00:10:56.678 3.698 - 3.721: 38.1216% ( 1000) 00:10:56.678 3.721 - 3.745: 44.3299% ( 819) 00:10:56.678 3.745 - 3.769: 50.3184% ( 790) 00:10:56.678 3.769 - 3.793: 54.9651% ( 613) 00:10:56.678 3.793 - 3.816: 59.2025% ( 559) 00:10:56.678 3.816 - 3.840: 62.8411% ( 480) 00:10:56.678 3.840 - 3.864: 66.5782% ( 493) 00:10:56.678 3.864 - 3.887: 70.5276% ( 521) 00:10:56.678 3.887 - 3.911: 74.4618% ( 519) 00:10:56.678 3.911 - 3.935: 78.7144% ( 561) 00:10:56.678 3.935 - 3.959: 82.1407% ( 452) 00:10:56.678 3.959 - 3.982: 85.1122% ( 392) 00:10:56.678 3.982 - 4.006: 87.3863% ( 300) 00:10:56.678 4.006 - 4.030: 89.1374% ( 231) 00:10:56.678 4.030 - 4.053: 90.4791% ( 177) 00:10:56.678 4.053 - 4.077: 91.7981% ( 174) 00:10:56.678 4.077 - 4.101: 92.7759% ( 129) 00:10:56.678 4.101 - 4.124: 93.5264% ( 99) 00:10:56.678 4.124 - 4.148: 94.3147% ( 104) 00:10:56.678 4.148 - 4.172: 94.9970% ( 90) 00:10:56.678 4.172 - 4.196: 95.4745% ( 63) 00:10:56.678 4.196 - 4.219: 95.8308% ( 47) 00:10:56.678 4.219 - 4.243: 96.1795% ( 46) 00:10:56.678 4.243 - 4.267: 96.4448% ( 35) 00:10:56.678 4.267 - 4.290: 96.6192% ( 23) 00:10:56.678 4.290 - 4.314: 96.7632% ( 19) 00:10:56.678 4.314 - 4.338: 96.8390% ( 10) 00:10:56.678 4.338 - 4.361: 96.9148% ( 10) 00:10:56.678 4.361 - 4.385: 97.0209% ( 14) 00:10:56.678 4.385 - 4.409: 97.0512% ( 4) 00:10:56.678 4.409 - 4.433: 97.0891% ( 5) 00:10:56.678 4.433 - 4.456: 97.1195% ( 4) 00:10:56.678 4.456 - 4.480: 97.1422% ( 3) 00:10:56.678 4.480 - 4.504: 97.1574% ( 2) 00:10:56.678 4.504 - 4.527: 97.1725% ( 2) 00:10:56.678 4.527 - 4.551: 97.1801% ( 1) 00:10:56.678 4.551 - 4.575: 97.1877% ( 1) 00:10:56.679 4.575 - 4.599: 97.2029% ( 2) 00:10:56.679 4.599 - 4.622: 97.2104% ( 1) 00:10:56.679 4.693 - 4.717: 97.2180% ( 1) 00:10:56.679 4.717 - 4.741: 97.2256% ( 1) 00:10:56.679 4.741 - 4.764: 97.2408% ( 2) 00:10:56.679 4.788 - 4.812: 97.2787% ( 5) 00:10:56.679 4.812 - 4.836: 97.3090% ( 4) 00:10:56.679 4.836 - 4.859: 97.3620% ( 7) 00:10:56.679 4.859 - 4.883: 97.3772% ( 2) 00:10:56.679 4.883 - 4.907: 97.4227% ( 6) 00:10:56.679 4.907 - 4.930: 97.4606% ( 5) 00:10:56.679 4.930 - 4.954: 97.5061% ( 6) 00:10:56.679 4.954 - 4.978: 97.5440% ( 5) 00:10:56.679 4.978 - 5.001: 97.5970% ( 7) 00:10:56.679 5.001 - 5.025: 97.6425% ( 6) 00:10:56.679 5.025 - 5.049: 97.6653% ( 3) 00:10:56.679 5.049 - 5.073: 97.7107% ( 6) 00:10:56.679 5.073 - 5.096: 97.7486% ( 5) 00:10:56.679 5.120 - 5.144: 97.7865% ( 5) 00:10:56.679 5.144 - 5.167: 97.8093% ( 3) 00:10:56.679 5.191 - 5.215: 97.8320% ( 3) 00:10:56.679 5.215 - 5.239: 97.8927% ( 8) 00:10:56.679 5.239 - 5.262: 97.9306% ( 5) 00:10:56.679 5.262 - 5.286: 97.9457% ( 2) 00:10:56.679 5.286 - 5.310: 97.9609% ( 2) 00:10:56.679 5.310 - 5.333: 97.9836% ( 3) 00:10:56.679 5.333 - 5.357: 97.9988% ( 2) 00:10:56.679 5.357 - 5.381: 98.0064% ( 1) 00:10:56.679 5.381 - 5.404: 98.0139% ( 1) 00:10:56.679 5.404 - 5.428: 98.0215% ( 1) 00:10:56.679 5.452 - 5.476: 98.0291% ( 1) 00:10:56.679 5.523 - 5.547: 98.0367% ( 1) 00:10:56.679 5.594 - 5.618: 98.0443% ( 1) 
00:10:56.679 5.641 - 5.665: 98.0594% ( 2) 00:10:56.679 5.713 - 5.736: 98.0670% ( 1) 00:10:56.679 5.902 - 5.926: 98.0746% ( 1) 00:10:56.679 5.950 - 5.973: 98.0973% ( 3) 00:10:56.679 5.973 - 5.997: 98.1125% ( 2) 00:10:56.679 5.997 - 6.021: 98.1277% ( 2) 00:10:56.679 6.068 - 6.116: 98.1428% ( 2) 00:10:56.679 6.116 - 6.163: 98.1504% ( 1) 00:10:56.679 6.210 - 6.258: 98.1580% ( 1) 00:10:56.679 6.258 - 6.305: 98.1656% ( 1) 00:10:56.679 6.305 - 6.353: 98.1807% ( 2) 00:10:56.679 6.353 - 6.400: 98.1959% ( 2) 00:10:56.679 6.400 - 6.447: 98.2035% ( 1) 00:10:56.679 6.590 - 6.637: 98.2186% ( 2) 00:10:56.679 6.637 - 6.684: 98.2262% ( 1) 00:10:56.679 6.732 - 6.779: 98.2338% ( 1) 00:10:56.679 6.779 - 6.827: 98.2414% ( 1) 00:10:56.679 7.016 - 7.064: 98.2489% ( 1) 00:10:56.679 7.064 - 7.111: 98.2565% ( 1) 00:10:56.679 7.301 - 7.348: 98.2641% ( 1) 00:10:56.679 7.396 - 7.443: 98.2717% ( 1) 00:10:56.679 7.443 - 7.490: 98.2793% ( 1) 00:10:56.679 7.490 - 7.538: 98.2868% ( 1) 00:10:56.679 7.633 - 7.680: 98.2944% ( 1) 00:10:56.679 7.680 - 7.727: 98.3020% ( 1) 00:10:56.679 7.727 - 7.775: 98.3096% ( 1) 00:10:56.679 7.822 - 7.870: 98.3247% ( 2) 00:10:56.679 7.917 - 7.964: 98.3399% ( 2) 00:10:56.679 7.964 - 8.012: 98.3475% ( 1) 00:10:56.679 8.012 - 8.059: 98.3626% ( 2) 00:10:56.679 8.296 - 8.344: 98.3702% ( 1) 00:10:56.679 8.391 - 8.439: 98.3854% ( 2) 00:10:56.679 8.439 - 8.486: 98.4157% ( 4) 00:10:56.679 8.581 - 8.628: 98.4233% ( 1) 00:10:56.679 8.628 - 8.676: 98.4309% ( 1) 00:10:56.679 8.676 - 8.723: 98.4384% ( 1) 00:10:56.679 8.723 - 8.770: 98.4460% ( 1) 00:10:56.679 8.770 - 8.818: 98.4536% ( 1) 00:10:56.679 8.960 - 9.007: 98.4688% ( 2) 00:10:56.679 9.007 - 9.055: 98.4763% ( 1) 00:10:56.679 9.150 - 9.197: 98.4839% ( 1) 00:10:56.679 9.244 - 9.292: 98.4915% ( 1) 00:10:56.679 9.292 - 9.339: 98.4991% ( 1) 00:10:56.679 9.387 - 9.434: 98.5294% ( 4) 00:10:56.679 9.434 - 9.481: 98.5370% ( 1) 00:10:56.679 9.529 - 9.576: 98.5522% ( 2) 00:10:56.679 9.576 - 9.624: 98.5597% ( 1) 00:10:56.679 9.624 - 9.671: 98.5673% ( 1) 00:10:56.679 9.719 - 9.766: 98.5825% ( 2) 00:10:56.679 9.956 - 10.003: 98.5901% ( 1) 00:10:56.679 10.145 - 10.193: 98.6052% ( 2) 00:10:56.679 10.193 - 10.240: 98.6128% ( 1) 00:10:56.679 10.335 - 10.382: 98.6204% ( 1) 00:10:56.679 10.524 - 10.572: 98.6280% ( 1) 00:10:56.679 10.714 - 10.761: 98.6355% ( 1) 00:10:56.679 10.761 - 10.809: 98.6507% ( 2) 00:10:56.679 10.809 - 10.856: 98.6583% ( 1) 00:10:56.679 11.093 - 11.141: 98.6659% ( 1) 00:10:56.679 11.141 - 11.188: 98.6734% ( 1) 00:10:56.679 11.378 - 11.425: 98.6810% ( 1) 00:10:56.679 11.425 - 11.473: 98.6886% ( 1) 00:10:56.679 11.662 - 11.710: 98.6962% ( 1) 00:10:56.679 11.710 - 11.757: 98.7038% ( 1) 00:10:56.679 11.757 - 11.804: 98.7113% ( 1) 00:10:56.679 11.804 - 11.852: 98.7265% ( 2) 00:10:56.679 11.852 - 11.899: 98.7341% ( 1) 00:10:56.679 11.947 - 11.994: 98.7417% ( 1) 00:10:56.679 12.136 - 12.231: 98.7568% ( 2) 00:10:56.679 12.231 - 12.326: 98.7720% ( 2) 00:10:56.679 12.326 - 12.421: 98.7796% ( 1) 00:10:56.679 12.705 - 12.800: 98.8023% ( 3) 00:10:56.679 12.895 - 12.990: 98.8175% ( 2) 00:10:56.679 12.990 - 13.084: 98.8326% ( 2) 00:10:56.679 13.084 - 13.179: 98.8402% ( 1) 00:10:56.679 13.274 - 13.369: 98.8554% ( 2) 00:10:56.679 13.369 - 13.464: 98.8705% ( 2) 00:10:56.679 13.464 - 13.559: 98.8781% ( 1) 00:10:56.679 13.653 - 13.748: 98.9008% ( 3) 00:10:56.679 13.843 - 13.938: 98.9084% ( 1) 00:10:56.679 13.938 - 14.033: 98.9160% ( 1) 00:10:56.679 14.127 - 14.222: 98.9236% ( 1) 00:10:56.679 14.317 - 14.412: 98.9312% ( 1) 00:10:56.679 14.412 - 14.507: 98.9388% ( 1) 
00:10:56.679 14.696 - 14.791: 98.9463% ( 1) 00:10:56.679 14.791 - 14.886: 98.9615% ( 2) 00:10:56.679 14.886 - 14.981: 98.9691% ( 1) 00:10:56.679 14.981 - 15.076: 98.9767% ( 1) 00:10:56.679 15.076 - 15.170: 98.9918% ( 2) 00:10:56.679 16.024 - 16.119: 98.9994% ( 1) 00:10:56.679 17.161 - 17.256: 99.0221% ( 3) 00:10:56.679 17.256 - 17.351: 99.0297% ( 1) 00:10:56.679 17.351 - 17.446: 99.0525% ( 3) 00:10:56.679 17.446 - 17.541: 99.0752% ( 3) 00:10:56.679 17.541 - 17.636: 99.0904% ( 2) 00:10:56.679 17.636 - 17.730: 99.1207% ( 4) 00:10:56.679 17.730 - 17.825: 99.1813% ( 8) 00:10:56.679 17.825 - 17.920: 99.2495% ( 9) 00:10:56.679 17.920 - 18.015: 99.3178% ( 9) 00:10:56.679 18.015 - 18.110: 99.4163% ( 13) 00:10:56.679 18.110 - 18.204: 99.4770% ( 8) 00:10:56.680 18.204 - 18.299: 99.5528% ( 10) 00:10:56.680 18.299 - 18.394: 99.6286% ( 10) 00:10:56.680 18.394 - 18.489: 99.6816% ( 7) 00:10:56.680 18.489 - 18.584: 99.7195% ( 5) 00:10:56.680 18.584 - 18.679: 99.7574% ( 5) 00:10:56.680 18.679 - 18.773: 99.7878% ( 4) 00:10:56.680 18.773 - 18.868: 99.8105% ( 3) 00:10:56.680 18.868 - 18.963: 99.8560% ( 6) 00:10:56.680 18.963 - 19.058: 99.8636% ( 1) 00:10:56.680 19.058 - 19.153: 99.8863% ( 3) 00:10:56.680 19.342 - 19.437: 99.9090% ( 3) 00:10:56.680 19.437 - 19.532: 99.9166% ( 1) 00:10:56.680 20.385 - 20.480: 99.9242% ( 1) 00:10:56.680 20.764 - 20.859: 99.9318% ( 1) 00:10:56.680 23.988 - 24.083: 99.9394% ( 1) 00:10:56.680 28.444 - 28.634: 99.9469% ( 1) 00:10:56.680 28.824 - 29.013: 99.9545% ( 1) 00:10:56.680 3252.527 - 3276.800: 99.9621% ( 1) 00:10:56.680 3980.705 - 4004.978: 99.9848% ( 3) 00:10:56.680 4004.978 - 4029.250: 100.0000% ( 2) 00:10:56.680 00:10:56.680 Complete histogram 00:10:56.680 ================== 00:10:56.680 Range in us Cumulative Count 00:10:56.680 2.062 - 2.074: 1.1371% ( 150) 00:10:56.680 2.074 - 2.086: 33.7174% ( 4298) 00:10:56.680 2.086 - 2.098: 48.4612% ( 1945) 00:10:56.680 2.098 - 2.110: 51.4251% ( 391) 00:10:56.680 2.110 - 2.121: 58.3005% ( 907) 00:10:56.680 2.121 - 2.133: 61.0673% ( 365) 00:10:56.680 2.133 - 2.145: 64.3496% ( 433) 00:10:56.680 2.145 - 2.157: 74.1965% ( 1299) 00:10:56.680 2.157 - 2.169: 76.5540% ( 311) 00:10:56.680 2.169 - 2.181: 77.9563% ( 185) 00:10:56.680 2.181 - 2.193: 80.4427% ( 328) 00:10:56.680 2.193 - 2.204: 81.6783% ( 163) 00:10:56.680 2.204 - 2.216: 82.6410% ( 127) 00:10:56.680 2.216 - 2.228: 87.7426% ( 673) 00:10:56.680 2.228 - 2.240: 90.3502% ( 344) 00:10:56.680 2.240 - 2.252: 91.5706% ( 161) 00:10:56.680 2.252 - 2.264: 93.1701% ( 211) 00:10:56.680 2.264 - 2.276: 93.7917% ( 82) 00:10:56.680 2.276 - 2.287: 94.1328% ( 45) 00:10:56.680 2.287 - 2.299: 94.6104% ( 63) 00:10:56.680 2.299 - 2.311: 95.0273% ( 55) 00:10:56.680 2.311 - 2.323: 95.5049% ( 63) 00:10:56.680 2.323 - 2.335: 95.6034% ( 13) 00:10:56.680 2.335 - 2.347: 95.6489% ( 6) 00:10:56.680 2.347 - 2.359: 95.7702% ( 16) 00:10:56.680 2.359 - 2.370: 95.8232% ( 7) 00:10:56.680 2.370 - 2.382: 96.0052% ( 24) 00:10:56.680 2.382 - 2.394: 96.1719% ( 22) 00:10:56.680 2.394 - 2.406: 96.3993% ( 30) 00:10:56.680 2.406 - 2.418: 96.5509% ( 20) 00:10:56.680 2.418 - 2.430: 96.6798% ( 17) 00:10:56.680 2.430 - 2.441: 96.8693% ( 25) 00:10:56.680 2.441 - 2.453: 96.9982% ( 17) 00:10:56.680 2.453 - 2.465: 97.1498% ( 20) 00:10:56.680 2.465 - 2.477: 97.2711% ( 16) 00:10:56.680 2.477 - 2.489: 97.4227% ( 20) 00:10:56.680 2.489 - 2.501: 97.5288% ( 14) 00:10:56.680 2.501 - 2.513: 97.6728% ( 19) 00:10:56.680 2.513 - 2.524: 97.7865% ( 15) 00:10:56.680 2.524 - 2.536: 97.8623% ( 10) 00:10:56.680 2.536 - 2.548: 97.9381% ( 10) 
00:10:56.680 2.548 - 2.560: 97.9760% ( 5) 00:10:56.680 2.560 - 2.572: 98.0139% ( 5) 00:10:56.680 2.572 - 2.584: 98.0670% ( 7) 00:10:56.680 2.584 - 2.596: 98.1125% ( 6) 00:10:56.680 2.596 - 2.607: 98.1277% ( 2) 00:10:56.680 2.607 - 2.619: 98.1428% ( 2) 00:10:56.680 2.619 - 2.631: 98.1580% ( 2) 00:10:56.680 2.631 - 2.643: 98.1731% ( 2) 00:10:56.680 2.643 - 2.655: 98.1807% ( 1) 00:10:56.680 2.655 - 2.667: 98.1959% ( 2) 00:10:56.680 2.667 - 2.679: 98.2035% ( 1) 00:10:56.680 2.679 - 2.690: 98.2338% ( 4) 00:10:56.680 2.690 - 2.702: 98.2565% ( 3) 00:10:56.680 2.702 - 2.714: 98.2717% ( 2) 00:10:56.680 [2024-07-15 23:37:31.754730] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:56.680 2.750 - 2.761: 98.2793% ( 1) 00:10:56.680 2.785 - 2.797: 98.2868% ( 1) 00:10:56.680 2.880 - 2.892: 98.2944% ( 1) 00:10:56.680 3.556 - 3.579: 98.3020% ( 1) 00:10:56.680 3.579 - 3.603: 98.3323% ( 4) 00:10:56.680 3.627 - 3.650: 98.3626% ( 4) 00:10:56.680 3.650 - 3.674: 98.3702% ( 1) 00:10:56.680 3.698 - 3.721: 98.3854% ( 2) 00:10:56.680 3.745 - 3.769: 98.3930% ( 1) 00:10:56.680 3.769 - 3.793: 98.4005% ( 1) 00:10:56.680 3.816 - 3.840: 98.4081% ( 1) 00:10:56.680 3.864 - 3.887: 98.4157% ( 1) 00:10:56.680 3.887 - 3.911: 98.4309% ( 2) 00:10:56.680 3.911 - 3.935: 98.4384% ( 1) 00:10:56.680 3.959 - 3.982: 98.4460% ( 1) 00:10:56.680 4.030 - 4.053: 98.4612% ( 2) 00:10:56.680 4.124 - 4.148: 98.4688% ( 1) 00:10:56.680 4.196 - 4.219: 98.4763% ( 1) 00:10:56.680 4.219 - 4.243: 98.4839% ( 1) 00:10:56.680 4.456 - 4.480: 98.4915% ( 1) 00:10:56.680 4.883 - 4.907: 98.4991% ( 1) 00:10:56.680 5.618 - 5.641: 98.5143% ( 2) 00:10:56.680 5.665 - 5.689: 98.5218% ( 1) 00:10:56.680 5.902 - 5.926: 98.5294% ( 1) 00:10:56.680 5.950 - 5.973: 98.5370% ( 1) 00:10:56.680 6.400 - 6.447: 98.5446% ( 1) 00:10:56.680 6.542 - 6.590: 98.5522% ( 1) 00:10:56.680 6.732 - 6.779: 98.5597% ( 1) 00:10:56.680 6.779 - 6.827: 98.5673% ( 1) 00:10:56.680 6.827 - 6.874: 98.5749% ( 1) 00:10:56.680 6.874 - 6.921: 98.5825% ( 1) 00:10:56.680 6.921 - 6.969: 98.5901% ( 1) 00:10:56.680 7.348 - 7.396: 98.5976% ( 1) 00:10:56.680 7.396 - 7.443: 98.6128% ( 2) 00:10:56.680 7.443 - 7.490: 98.6204% ( 1) 00:10:56.680 7.775 - 7.822: 98.6280% ( 1) 00:10:56.680 8.533 - 8.581: 98.6355% ( 1) 00:10:56.680 8.676 - 8.723: 98.6507% ( 2) 00:10:56.680 8.723 - 8.770: 98.6583% ( 1) 00:10:56.680 9.529 - 9.576: 98.6659% ( 1) 00:10:56.680 15.455 - 15.550: 98.6734% ( 1) 00:10:56.680 15.550 - 15.644: 98.6810% ( 1) 00:10:56.680 15.644 - 15.739: 98.6962% ( 2) 00:10:56.680 15.739 - 15.834: 98.7113% ( 2) 00:10:56.680 15.834 - 15.929: 98.7492% ( 5) 00:10:56.680 15.929 - 16.024: 98.7796% ( 4) 00:10:56.680 16.024 - 16.119: 98.8099% ( 4) 00:10:56.680 16.119 - 16.213: 98.8629% ( 7) 00:10:56.680 16.213 - 16.308: 98.9084% ( 6) 00:10:56.680 16.308 - 16.403: 98.9615% ( 7) 00:10:56.680 16.403 - 16.498: 98.9994% ( 5) 00:10:56.680 16.498 - 16.593: 99.0600% ( 8) 00:10:56.680 16.593 - 16.687: 99.0979% ( 5) 00:10:56.680 16.687 - 16.782: 99.1510% ( 7) 00:10:56.680 16.782 - 16.877: 99.2192% ( 9) 00:10:56.680 16.877 - 16.972: 99.2268% ( 1) 00:10:56.680 16.972 - 17.067: 99.2344% ( 1) 00:10:56.681 17.067 - 17.161: 99.2495% ( 2) 00:10:56.681 17.351 - 17.446: 99.2799% ( 4) 00:10:56.681 17.636 - 17.730: 99.2874% ( 1) 00:10:56.681 17.825 - 17.920: 99.2950% ( 1) 00:10:56.681 18.489 - 18.584: 99.3026% ( 1) 00:10:56.681 18.584 - 18.679: 99.3102% ( 1) 00:10:56.681 19.247 - 19.342: 99.3253% ( 2) 00:10:56.681 22.187 - 22.281: 99.3329% ( 1) 00:10:56.681 3519.526 - 3543.799:
99.3405% ( 1) 00:10:56.681 3980.705 - 4004.978: 99.9090% ( 75) 00:10:56.681 4004.978 - 4029.250: 100.0000% ( 12) 00:10:56.681 00:10:56.940 23:37:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:10:56.940 23:37:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:56.940 23:37:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:10:56.940 23:37:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:10:56.940 23:37:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:56.940 [ 00:10:56.940 { 00:10:56.940 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:56.940 "subtype": "Discovery", 00:10:56.940 "listen_addresses": [], 00:10:56.940 "allow_any_host": true, 00:10:56.940 "hosts": [] 00:10:56.940 }, 00:10:56.940 { 00:10:56.940 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:56.940 "subtype": "NVMe", 00:10:56.940 "listen_addresses": [ 00:10:56.940 { 00:10:56.940 "trtype": "VFIOUSER", 00:10:56.940 "adrfam": "IPv4", 00:10:56.940 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:56.940 "trsvcid": "0" 00:10:56.940 } 00:10:56.940 ], 00:10:56.940 "allow_any_host": true, 00:10:56.940 "hosts": [], 00:10:56.940 "serial_number": "SPDK1", 00:10:56.940 "model_number": "SPDK bdev Controller", 00:10:56.940 "max_namespaces": 32, 00:10:56.940 "min_cntlid": 1, 00:10:56.940 "max_cntlid": 65519, 00:10:56.940 "namespaces": [ 00:10:56.940 { 00:10:56.940 "nsid": 1, 00:10:56.940 "bdev_name": "Malloc1", 00:10:56.940 "name": "Malloc1", 00:10:56.940 "nguid": "96507A39241148079E8441D61BD7C934", 00:10:56.940 "uuid": "96507a39-2411-4807-9e84-41d61bd7c934" 00:10:56.940 }, 00:10:56.940 { 00:10:56.940 "nsid": 2, 00:10:56.940 "bdev_name": "Malloc3", 00:10:56.940 "name": "Malloc3", 00:10:56.940 "nguid": "D812FCDAAF924FF28FA15D0767F3C7A5", 00:10:56.940 "uuid": "d812fcda-af92-4ff2-8fa1-5d0767f3c7a5" 00:10:56.940 } 00:10:56.940 ] 00:10:56.940 }, 00:10:56.940 { 00:10:56.940 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:56.940 "subtype": "NVMe", 00:10:56.940 "listen_addresses": [ 00:10:56.940 { 00:10:56.940 "trtype": "VFIOUSER", 00:10:56.940 "adrfam": "IPv4", 00:10:56.940 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:56.940 "trsvcid": "0" 00:10:56.940 } 00:10:56.940 ], 00:10:56.940 "allow_any_host": true, 00:10:56.940 "hosts": [], 00:10:56.940 "serial_number": "SPDK2", 00:10:56.940 "model_number": "SPDK bdev Controller", 00:10:56.940 "max_namespaces": 32, 00:10:56.940 "min_cntlid": 1, 00:10:56.940 "max_cntlid": 65519, 00:10:56.940 "namespaces": [ 00:10:56.940 { 00:10:56.940 "nsid": 1, 00:10:56.940 "bdev_name": "Malloc2", 00:10:56.940 "name": "Malloc2", 00:10:56.940 "nguid": "5CBD549A0BEB4CC4A50FECDF195ADA09", 00:10:56.940 "uuid": "5cbd549a-0beb-4cc4-a50f-ecdf195ada09" 00:10:56.940 } 00:10:56.940 ] 00:10:56.940 } 00:10:56.940 ] 00:10:56.940 23:37:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:56.940 23:37:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3733224 00:10:56.940 23:37:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:56.940 23:37:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:10:56.940 23:37:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:10:56.940 23:37:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:56.940 23:37:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:56.940 23:37:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:10:56.940 23:37:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:56.940 23:37:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:10:57.198 EAL: No free 2048 kB hugepages reported on node 1 00:10:57.198 [2024-07-15 23:37:32.202497] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:57.198 Malloc4 00:10:57.455 23:37:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:10:57.455 [2024-07-15 23:37:32.550907] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:57.455 23:37:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:57.713 Asynchronous Event Request test 00:10:57.713 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:57.713 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:57.713 Registering asynchronous event callbacks... 00:10:57.713 Starting namespace attribute notice tests for all controllers... 00:10:57.713 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:57.713 aer_cb - Changed Namespace 00:10:57.713 Cleaning up... 
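The AER test above is a handshake: the aer tool connects, arms its Asynchronous Event Requests (-n 2), and touches /tmp/aer_touch_file once ready; the script waits on that file, removes it, then hot-adds a second namespace, which fires the namespace-attribute-changed event seen in the "aer_cb - Changed Namespace" line. The triggering side reduces to two RPCs (copied from this run):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC bdev_malloc_create 64 512 --name Malloc4                        # new 64 MB malloc bdev with 512-byte blocks
  $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2   # attach as NSID 2 -> AEN to the waiting host

The nvmf_get_subsystems dump that follows confirms Malloc4 is now NSID 2 of cnode2.
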
00:10:57.713 [ 00:10:57.713 { 00:10:57.713 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:57.713 "subtype": "Discovery", 00:10:57.713 "listen_addresses": [], 00:10:57.713 "allow_any_host": true, 00:10:57.713 "hosts": [] 00:10:57.713 }, 00:10:57.713 { 00:10:57.713 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:57.713 "subtype": "NVMe", 00:10:57.713 "listen_addresses": [ 00:10:57.713 { 00:10:57.713 "trtype": "VFIOUSER", 00:10:57.713 "adrfam": "IPv4", 00:10:57.713 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:57.713 "trsvcid": "0" 00:10:57.713 } 00:10:57.713 ], 00:10:57.713 "allow_any_host": true, 00:10:57.713 "hosts": [], 00:10:57.713 "serial_number": "SPDK1", 00:10:57.713 "model_number": "SPDK bdev Controller", 00:10:57.713 "max_namespaces": 32, 00:10:57.713 "min_cntlid": 1, 00:10:57.713 "max_cntlid": 65519, 00:10:57.713 "namespaces": [ 00:10:57.713 { 00:10:57.713 "nsid": 1, 00:10:57.713 "bdev_name": "Malloc1", 00:10:57.713 "name": "Malloc1", 00:10:57.713 "nguid": "96507A39241148079E8441D61BD7C934", 00:10:57.713 "uuid": "96507a39-2411-4807-9e84-41d61bd7c934" 00:10:57.713 }, 00:10:57.713 { 00:10:57.713 "nsid": 2, 00:10:57.713 "bdev_name": "Malloc3", 00:10:57.713 "name": "Malloc3", 00:10:57.713 "nguid": "D812FCDAAF924FF28FA15D0767F3C7A5", 00:10:57.713 "uuid": "d812fcda-af92-4ff2-8fa1-5d0767f3c7a5" 00:10:57.713 } 00:10:57.713 ] 00:10:57.713 }, 00:10:57.713 { 00:10:57.713 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:57.713 "subtype": "NVMe", 00:10:57.713 "listen_addresses": [ 00:10:57.713 { 00:10:57.713 "trtype": "VFIOUSER", 00:10:57.713 "adrfam": "IPv4", 00:10:57.713 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:57.713 "trsvcid": "0" 00:10:57.713 } 00:10:57.713 ], 00:10:57.713 "allow_any_host": true, 00:10:57.713 "hosts": [], 00:10:57.713 "serial_number": "SPDK2", 00:10:57.713 "model_number": "SPDK bdev Controller", 00:10:57.713 "max_namespaces": 32, 00:10:57.713 "min_cntlid": 1, 00:10:57.713 "max_cntlid": 65519, 00:10:57.713 "namespaces": [ 00:10:57.713 { 00:10:57.713 "nsid": 1, 00:10:57.713 "bdev_name": "Malloc2", 00:10:57.713 "name": "Malloc2", 00:10:57.714 "nguid": "5CBD549A0BEB4CC4A50FECDF195ADA09", 00:10:57.714 "uuid": "5cbd549a-0beb-4cc4-a50f-ecdf195ada09" 00:10:57.714 }, 00:10:57.714 { 00:10:57.714 "nsid": 2, 00:10:57.714 "bdev_name": "Malloc4", 00:10:57.714 "name": "Malloc4", 00:10:57.714 "nguid": "8E499B11DD5B40D3A3A0F8D16B13A744", 00:10:57.714 "uuid": "8e499b11-dd5b-40d3-a3a0-f8d16b13a744" 00:10:57.714 } 00:10:57.714 ] 00:10:57.714 } 00:10:57.714 ] 00:10:57.714 23:37:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3733224 00:10:57.714 23:37:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:10:57.714 23:37:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3727632 00:10:57.714 23:37:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 3727632 ']' 00:10:57.714 23:37:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 3727632 00:10:57.714 23:37:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:10:57.714 23:37:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:57.972 23:37:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3727632 00:10:57.972 23:37:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:57.972 23:37:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:10:57.972 23:37:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3727632' 00:10:57.972 killing process with pid 3727632 00:10:57.972 23:37:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 3727632 00:10:57.972 23:37:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 3727632 00:10:58.279 23:37:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:58.279 23:37:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:58.279 23:37:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:10:58.279 23:37:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:10:58.279 23:37:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:10:58.279 23:37:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3733376 00:10:58.279 23:37:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:10:58.279 23:37:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3733376' 00:10:58.279 Process pid: 3733376 00:10:58.279 23:37:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:58.279 23:37:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3733376 00:10:58.279 23:37:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 3733376 ']' 00:10:58.279 23:37:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.279 23:37:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:58.279 23:37:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.279 23:37:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:58.279 23:37:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:58.279 [2024-07-15 23:37:33.241666] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:10:58.279 [2024-07-15 23:37:33.242651] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:10:58.279 [2024-07-15 23:37:33.242708] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:58.279 EAL: No free 2048 kB hugepages reported on node 1 00:10:58.279 [2024-07-15 23:37:33.300127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:58.536 [2024-07-15 23:37:33.402366] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:58.536 [2024-07-15 23:37:33.402413] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:58.536 [2024-07-15 23:37:33.402440] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:58.536 [2024-07-15 23:37:33.402451] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:58.536 [2024-07-15 23:37:33.402460] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:58.536 [2024-07-15 23:37:33.402544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:58.536 [2024-07-15 23:37:33.402650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:58.536 [2024-07-15 23:37:33.402724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:58.536 [2024-07-15 23:37:33.402726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.536 [2024-07-15 23:37:33.495708] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:10:58.536 [2024-07-15 23:37:33.495948] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:10:58.536 [2024-07-15 23:37:33.496244] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:10:58.536 [2024-07-15 23:37:33.496865] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:10:58.536 [2024-07-15 23:37:33.497131] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:10:58.536 23:37:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:58.536 23:37:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:10:58.536 23:37:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:10:59.465 23:37:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:10:59.722 23:37:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:10:59.722 23:37:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:10:59.722 23:37:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:59.722 23:37:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:10:59.722 23:37:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:59.979 Malloc1 00:10:59.979 23:37:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:11:00.237 23:37:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:11:00.494 23:37:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:11:01.059 23:37:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:11:01.059 23:37:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:11:01.059 23:37:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:01.059 Malloc2 00:11:01.316 23:37:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:11:01.574 23:37:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:11:01.831 23:37:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:11:02.089 23:37:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:11:02.089 23:37:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3733376 00:11:02.089 23:37:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 3733376 ']' 00:11:02.089 23:37:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 3733376 00:11:02.089 23:37:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:11:02.089 23:37:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:02.089 23:37:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3733376 00:11:02.089 23:37:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:02.089 23:37:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:02.089 23:37:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3733376' 00:11:02.089 killing process with pid 3733376 00:11:02.089 23:37:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 3733376 00:11:02.089 23:37:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 3733376 00:11:02.347 23:37:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:11:02.347 23:37:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:02.347 00:11:02.347 real 0m52.787s 00:11:02.347 user 3m28.553s 00:11:02.347 sys 0m4.356s 00:11:02.347 23:37:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:02.347 23:37:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:02.347 ************************************ 00:11:02.347 END TEST nvmf_vfio_user 00:11:02.347 ************************************ 00:11:02.347 23:37:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:02.347 23:37:37 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:11:02.348 23:37:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:02.348 23:37:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:02.348 23:37:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:02.348 ************************************ 00:11:02.348 START 
TEST nvmf_vfio_user_nvme_compliance 00:11:02.348 ************************************ 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:11:02.348 * Looking for test storage... 00:11:02.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3733862 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3733862' 00:11:02.348 Process pid: 3733862 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3733862 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 3733862 ']' 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:02.348 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:02.606 [2024-07-15 23:37:37.488719] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:11:02.606 [2024-07-15 23:37:37.488809] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:02.606 EAL: No free 2048 kB hugepages reported on node 1 00:11:02.606 [2024-07-15 23:37:37.545714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:02.606 [2024-07-15 23:37:37.655140] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:02.606 [2024-07-15 23:37:37.655200] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:02.606 [2024-07-15 23:37:37.655228] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:02.606 [2024-07-15 23:37:37.655239] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:02.606 [2024-07-15 23:37:37.655248] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:02.606 [2024-07-15 23:37:37.655395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:02.606 [2024-07-15 23:37:37.655455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:02.606 [2024-07-15 23:37:37.655458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.864 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:02.864 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:11:02.864 23:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:11:03.819 23:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:11:03.819 23:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:11:03.819 23:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:11:03.819 23:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.819 23:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:03.819 23:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.819 23:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:11:03.819 23:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:11:03.819 23:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.819 23:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:03.819 malloc0 00:11:03.819 23:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.819 23:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:11:03.819 23:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.819 23:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:03.819 23:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.819 23:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:11:03.819 23:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.819 23:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:03.819 23:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.819 23:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:11:03.819 23:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.819 23:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:03.819 23:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.819 
23:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:11:03.819 EAL: No free 2048 kB hugepages reported on node 1 00:11:04.076 00:11:04.076 00:11:04.076 CUnit - A unit testing framework for C - Version 2.1-3 00:11:04.076 http://cunit.sourceforge.net/ 00:11:04.076 00:11:04.076 00:11:04.076 Suite: nvme_compliance 00:11:04.076 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 23:37:39.005795] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:04.076 [2024-07-15 23:37:39.007273] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:11:04.076 [2024-07-15 23:37:39.007298] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:11:04.076 [2024-07-15 23:37:39.007325] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:11:04.076 [2024-07-15 23:37:39.008813] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:04.076 passed 00:11:04.076 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 23:37:39.094442] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:04.076 [2024-07-15 23:37:39.097460] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:04.076 passed 00:11:04.076 Test: admin_identify_ns ...[2024-07-15 23:37:39.185495] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:04.333 [2024-07-15 23:37:39.244976] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:04.333 [2024-07-15 23:37:39.252973] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:11:04.333 [2024-07-15 23:37:39.274112] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:04.333 passed 00:11:04.333 Test: admin_get_features_mandatory_features ...[2024-07-15 23:37:39.357821] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:04.333 [2024-07-15 23:37:39.360842] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:04.333 passed 00:11:04.333 Test: admin_get_features_optional_features ...[2024-07-15 23:37:39.445422] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:04.333 [2024-07-15 23:37:39.448445] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:04.590 passed 00:11:04.590 Test: admin_set_features_number_of_queues ...[2024-07-15 23:37:39.530619] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:04.590 [2024-07-15 23:37:39.639077] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:04.590 passed 00:11:04.847 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 23:37:39.722812] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:04.847 [2024-07-15 23:37:39.725834] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:04.847 passed 00:11:04.847 Test: admin_get_log_page_with_lpo ...[2024-07-15 23:37:39.806774] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:04.847 [2024-07-15 23:37:39.877985] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:11:04.847 [2024-07-15 23:37:39.891047] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:04.847 passed 00:11:04.847 Test: fabric_property_get ...[2024-07-15 23:37:39.970647] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:04.847 [2024-07-15 23:37:39.971971] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:11:05.105 [2024-07-15 23:37:39.973670] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:05.105 passed 00:11:05.105 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 23:37:40.060292] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:05.105 [2024-07-15 23:37:40.061681] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:11:05.105 [2024-07-15 23:37:40.063322] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:05.105 passed 00:11:05.105 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 23:37:40.150540] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:05.361 [2024-07-15 23:37:40.233965] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:05.361 [2024-07-15 23:37:40.248051] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:05.361 [2024-07-15 23:37:40.253188] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:05.361 passed 00:11:05.361 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 23:37:40.334800] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:05.361 [2024-07-15 23:37:40.336147] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:11:05.361 [2024-07-15 23:37:40.337827] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:05.361 passed 00:11:05.361 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 23:37:40.419108] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:05.618 [2024-07-15 23:37:40.498970] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:11:05.618 [2024-07-15 23:37:40.522965] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:05.618 [2024-07-15 23:37:40.528085] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:05.618 passed 00:11:05.618 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 23:37:40.611809] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:05.618 [2024-07-15 23:37:40.613140] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:11:05.618 [2024-07-15 23:37:40.613195] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:11:05.618 [2024-07-15 23:37:40.614831] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:05.618 passed 00:11:05.618 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 23:37:40.696091] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:05.876 [2024-07-15 23:37:40.790969] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:11:05.876 [2024-07-15 23:37:40.798979] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:11:05.876 [2024-07-15 23:37:40.806967] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:11:05.876 [2024-07-15 23:37:40.814964] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:11:05.876 [2024-07-15 23:37:40.844094] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:05.876 passed 00:11:05.876 Test: admin_create_io_sq_verify_pc ...[2024-07-15 23:37:40.926233] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:05.876 [2024-07-15 23:37:40.942979] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:11:05.876 [2024-07-15 23:37:40.960603] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:05.876 passed 00:11:06.133 Test: admin_create_io_qp_max_qps ...[2024-07-15 23:37:41.042189] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:07.064 [2024-07-15 23:37:42.136987] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:11:07.627 [2024-07-15 23:37:42.528111] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:07.627 passed 00:11:07.627 Test: admin_create_io_sq_shared_cq ...[2024-07-15 23:37:42.609572] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:07.628 [2024-07-15 23:37:42.740980] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:11:07.885 [2024-07-15 23:37:42.778050] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:07.885 passed 00:11:07.885 00:11:07.885 Run Summary: Type Total Ran Passed Failed Inactive 00:11:07.885 suites 1 1 n/a 0 0 00:11:07.885 tests 18 18 18 0 0 00:11:07.885 asserts 360 360 360 0 n/a 00:11:07.885 00:11:07.885 Elapsed time = 1.565 seconds 00:11:07.885 23:37:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3733862 00:11:07.885 23:37:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 3733862 ']' 00:11:07.885 23:37:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 3733862 00:11:07.885 23:37:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:11:07.885 23:37:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:07.885 23:37:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3733862 00:11:07.885 23:37:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:07.885 23:37:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:07.885 23:37:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3733862' 00:11:07.885 killing process with pid 3733862 00:11:07.885 23:37:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 3733862 00:11:07.885 23:37:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 3733862 00:11:08.143 23:37:43 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:11:08.143 23:37:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:11:08.143 00:11:08.143 real 0m5.784s 00:11:08.143 user 0m16.185s 00:11:08.143 sys 0m0.559s 00:11:08.143 23:37:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:08.143 23:37:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:08.143 ************************************ 00:11:08.143 END TEST nvmf_vfio_user_nvme_compliance 00:11:08.143 ************************************ 00:11:08.143 23:37:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:08.143 23:37:43 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:11:08.143 23:37:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:08.143 23:37:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:08.143 23:37:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:08.143 ************************************ 00:11:08.143 START TEST nvmf_vfio_user_fuzz 00:11:08.143 ************************************ 00:11:08.143 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:11:08.143 * Looking for test storage... 00:11:08.143 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:08.143 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:08.143 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:11:08.143 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:08.143 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:08.143 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:08.143 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:08.143 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:08.143 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:08.143 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:08.143 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:08.143 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:08.143 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:08.143 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:08.143 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:08.143 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:08.143 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:08.401 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
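The compliance run above tore down through the killprocess helper (the autotest_common.sh@948-@972 records): validate the pid, confirm the process is still alive with kill -0, refuse to touch anything whose comm is sudo, then kill and wait. A rough reconstruction from the xtrace alone, not the verbatim helper:

    # Sketch inferred from the @948-@972 records; names and exact guards are assumptions.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                            # @948: a pid is required
        kill -0 "$pid" 2>/dev/null || return 1               # @952: still running?
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")  # @953-@954
            [ "$process_name" = sudo ] && return 1           # @958: never kill sudo itself
        fi
        echo "killing process with pid $pid"                 # @966
        kill "$pid"                                          # @967
        wait "$pid"                                          # @972
    }

The sudo guard exists presumably so a mis-resolved pid never kills the privileged wrapper instead of the reactor process it spawned.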
00:11:08.401 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:08.401 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:08.401 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:08.401 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:08.401 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:08.401 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.401 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.401 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.401 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:11:08.401 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.401 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:11:08.401 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:08.401 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:08.401 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:08.401 23:37:43 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:08.401 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:08.401 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:08.401 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:08.401 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:08.401 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:08.401 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:08.401 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:11:08.401 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:11:08.401 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:08.401 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:08.401 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:11:08.401 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3734603 00:11:08.401 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:08.401 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3734603' 00:11:08.401 Process pid: 3734603 00:11:08.401 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:08.401 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3734603 00:11:08.401 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 3734603 ']' 00:11:08.401 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.401 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:08.401 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
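Once the fuzz target reports ready, the trace below replays the same VFIOUSER subsystem bring-up and then hands the transport ID to the nvme_fuzz app for a timed, seeded run. The invocation, reformatted for readability, with flag roles hedged where the trace itself does not spell them out:

    # -m 0x2    : DPDK core mask (run the fuzzer on core 1)
    # -t 30     : time-boxed run, consistent with the 23:37:44 -> 23:38:15 gap below
    # -S 123456 : fixed RNG seed so a failing case can be replayed
    # -F "$trid": transport ID of the subsystem under test
    # -N -a     : further vfio_user_fuzz.sh flags; their roles are not evident from this log
    trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a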
00:11:08.401 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:08.401 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:08.658 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:08.658 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:11:08.658 23:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:11:09.596 23:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:11:09.596 23:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.596 23:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:09.596 23:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.596 23:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:11:09.596 23:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:11:09.596 23:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.596 23:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:09.596 malloc0 00:11:09.596 23:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.596 23:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:11:09.596 23:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.596 23:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:09.596 23:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.596 23:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:11:09.596 23:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.596 23:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:09.596 23:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.596 23:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:11:09.596 23:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.596 23:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:09.596 23:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.596 23:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:11:09.596 23:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:11:41.685 Fuzzing completed. 
Shutting down the fuzz application 00:11:41.685 00:11:41.685 Dumping successful admin opcodes: 00:11:41.685 8, 9, 10, 24, 00:11:41.685 Dumping successful io opcodes: 00:11:41.685 0, 00:11:41.685 NS: 0x200003a1ef00 I/O qp, Total commands completed: 679649, total successful commands: 2644, random_seed: 2789007488 00:11:41.685 NS: 0x200003a1ef00 admin qp, Total commands completed: 95568, total successful commands: 774, random_seed: 1700571968 00:11:41.685 23:38:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:11:41.685 23:38:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.685 23:38:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:41.685 23:38:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.685 23:38:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3734603 00:11:41.685 23:38:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 3734603 ']' 00:11:41.685 23:38:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 3734603 00:11:41.685 23:38:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:11:41.685 23:38:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:41.685 23:38:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3734603 00:11:41.685 23:38:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:41.685 23:38:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:41.685 23:38:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3734603' 00:11:41.685 killing process with pid 3734603 00:11:41.685 23:38:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 3734603 00:11:41.685 23:38:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 3734603 00:11:41.685 23:38:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:11:41.685 23:38:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:11:41.685 00:11:41.685 real 0m32.268s 00:11:41.685 user 0m30.370s 00:11:41.685 sys 0m29.787s 00:11:41.685 23:38:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:41.685 23:38:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:41.685 ************************************ 00:11:41.685 END TEST nvmf_vfio_user_fuzz 00:11:41.685 ************************************ 00:11:41.685 23:38:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:41.685 23:38:15 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:41.685 23:38:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:41.685 23:38:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:41.685 23:38:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:41.685 ************************************ 00:11:41.685 
START TEST nvmf_host_management 00:11:41.685 ************************************ 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:41.685 * Looking for test storage... 00:11:41.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.685 23:38:15 
nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:41.685 23:38:15 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:11:41.685 23:38:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:42.621 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:42.621 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:11:42.621 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:42.621 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:42.621 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:42.621 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:42.621 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:42.621 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:11:42.621 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:42.621 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:11:42.621 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:11:42.621 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:42.622 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:42.622 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:42.622 Found net devices under 0000:09:00.0: cvl_0_0 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:42.622 Found net devices under 0000:09:00.1: cvl_0_1 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:42.622 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:42.881 23:38:17 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:42.881 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:42.881 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:42.881 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:42.881 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:11:42.881 00:11:42.881 --- 10.0.0.2 ping statistics --- 00:11:42.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.881 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:11:42.881 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:42.881 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:42.881 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:11:42.881 00:11:42.881 --- 10.0.0.1 ping statistics --- 00:11:42.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.881 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:11:42.881 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:42.881 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:11:42.881 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:42.881 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:42.881 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:42.881 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:42.881 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:42.881 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:42.881 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:42.881 23:38:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:11:42.881 23:38:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:11:42.881 23:38:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:42.881 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:42.881 23:38:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:42.881 23:38:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:42.881 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=3740039 00:11:42.881 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:42.881 23:38:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 3740039 00:11:42.881 23:38:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 3740039 ']' 00:11:42.881 23:38:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.881 23:38:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:42.881 23:38:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:42.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.881 23:38:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:42.881 23:38:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:42.881 [2024-07-15 23:38:17.837431] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:11:42.881 [2024-07-15 23:38:17.837515] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:42.881 EAL: No free 2048 kB hugepages reported on node 1 00:11:42.881 [2024-07-15 23:38:17.901387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:43.139 [2024-07-15 23:38:18.013971] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:43.139 [2024-07-15 23:38:18.014019] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:43.139 [2024-07-15 23:38:18.014033] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:43.139 [2024-07-15 23:38:18.014045] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:43.139 [2024-07-15 23:38:18.014055] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:43.139 [2024-07-15 23:38:18.014157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:43.139 [2024-07-15 23:38:18.014192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:43.139 [2024-07-15 23:38:18.014218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:43.139 [2024-07-15 23:38:18.014220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.139 23:38:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:43.139 23:38:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:11:43.139 23:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:43.139 23:38:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:43.139 23:38:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:43.139 23:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:43.139 23:38:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:43.139 23:38:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.140 23:38:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:43.140 [2024-07-15 23:38:18.173861] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:43.140 23:38:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.140 23:38:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:43.140 23:38:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:43.140 23:38:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:43.140 23:38:18 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:43.140 23:38:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:11:43.140 23:38:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:11:43.140 23:38:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.140 23:38:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:43.140 Malloc0 00:11:43.140 [2024-07-15 23:38:18.238614] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:43.140 23:38:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.140 23:38:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:43.140 23:38:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:43.140 23:38:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:43.398 23:38:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3740203 00:11:43.398 23:38:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3740203 /var/tmp/bdevperf.sock 00:11:43.398 23:38:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 3740203 ']' 00:11:43.398 23:38:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:43.398 23:38:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:43.398 23:38:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:43.398 23:38:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:43.398 23:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:43.398 23:38:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:43.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
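bdevperf is launched with its configuration delivered over process substitution (--json /dev/fd/63); gen_nvmf_target_json renders the bdev_nvme_attach_controller fragment traced below. As a sketch, an equivalent run from a saved file would look like this, assuming the standard SPDK "subsystems" config envelope around the fragment (the envelope itself is not printed in this trace):

    # The "params"/"method" object is copied from the printf trace below; the
    # surrounding "subsystems" wrapper is the assumed standard config schema.
    printf '%s\n' '{ "subsystems": [ { "subsystem": "bdev", "config": [
      { "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode0",
                    "hostnqn": "nqn.2016-06.io.spdk:host0",
                    "hdgst": false, "ddgst": false },
        "method": "bdev_nvme_attach_controller" } ] } ] }' > nvme0.json
    build/examples/bdevperf -r /var/tmp/bdevperf.sock --json nvme0.json \
        -q 64 -o 65536 -w verify -t 10    # queue depth 64, 64 KiB verify I/O, 10 s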
00:11:43.398 23:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:43.398 23:38:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:43.398 23:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:43.398 23:38:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:43.398 23:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:43.398 { 00:11:43.398 "params": { 00:11:43.398 "name": "Nvme$subsystem", 00:11:43.398 "trtype": "$TEST_TRANSPORT", 00:11:43.398 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:43.398 "adrfam": "ipv4", 00:11:43.398 "trsvcid": "$NVMF_PORT", 00:11:43.398 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:43.398 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:43.398 "hdgst": ${hdgst:-false}, 00:11:43.398 "ddgst": ${ddgst:-false} 00:11:43.398 }, 00:11:43.398 "method": "bdev_nvme_attach_controller" 00:11:43.398 } 00:11:43.398 EOF 00:11:43.398 )") 00:11:43.398 23:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:43.398 23:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:11:43.398 23:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:43.398 23:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:43.398 "params": { 00:11:43.398 "name": "Nvme0", 00:11:43.398 "trtype": "tcp", 00:11:43.398 "traddr": "10.0.0.2", 00:11:43.398 "adrfam": "ipv4", 00:11:43.398 "trsvcid": "4420", 00:11:43.398 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:43.398 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:43.398 "hdgst": false, 00:11:43.398 "ddgst": false 00:11:43.398 }, 00:11:43.398 "method": "bdev_nvme_attach_controller" 00:11:43.398 }' 00:11:43.398 [2024-07-15 23:38:18.320175] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:11:43.398 [2024-07-15 23:38:18.320262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3740203 ] 00:11:43.398 EAL: No free 2048 kB hugepages reported on node 1 00:11:43.398 [2024-07-15 23:38:18.380526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.398 [2024-07-15 23:38:18.490784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.657 Running I/O for 10 seconds... 
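bdevperf receives its controller configuration inline rather than from a file: gen_nvmf_target_json expands the heredoc template traced above, jq validates it, and process substitution hands the result to --json as /dev/fd/63. A sketch of the equivalent standalone invocation; the params object mirrors the printf output above, but the outer "subsystems" wrapper is an assumption about the full document gen_nvmf_target_json emits:

  # Hypothetical reconstruction; only the params object is confirmed by the trace.
  cfg='{"subsystems":[{"subsystem":"bdev","config":[{"method":"bdev_nvme_attach_controller",
        "params":{"name":"Nvme0","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4",
        "trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode0",
        "hostnqn":"nqn.2016-06.io.spdk:host0","hdgst":false,"ddgst":false}}]}]}'
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 10 --json <(echo "$cfg")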
00:11:43.657 23:38:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:43.657 23:38:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:11:43.657 23:38:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:43.657 23:38:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.657 23:38:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:43.657 23:38:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.657 23:38:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:43.657 23:38:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:43.657 23:38:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:43.657 23:38:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:43.657 23:38:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:11:43.657 23:38:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:11:43.657 23:38:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:43.657 23:38:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:43.657 23:38:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:43.657 23:38:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:43.657 23:38:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.657 23:38:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:43.915 23:38:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.915 23:38:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:11:43.915 23:38:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:11:43.915 23:38:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:11:44.175 23:38:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:11:44.175 23:38:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:44.175 23:38:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:44.175 23:38:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:44.175 23:38:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.175 23:38:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:44.175 23:38:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.175 23:38:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:11:44.175 23:38:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:11:44.175 23:38:19 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0
00:11:44.175 23:38:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break
00:11:44.175 23:38:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:11:44.175 23:38:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:11:44.175 23:38:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:44.175 23:38:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:11:44.175 [2024-07-15 23:38:19.105521] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1777380 is same with the state(5) to be set
00:11:44.175 [... the same tcp.c:1621 recv-state error for tqpair=0x1777380 repeated 7 more times, 23:38:19.105602-105704 ...]
00:11:44.175 23:38:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:44.175 23:38:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:11:44.175 23:38:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:44.175 23:38:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:11:44.175 23:38:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:44.175 23:38:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:11:44.175 [2024-07-15 23:38:19.118486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:11:44.175 [2024-07-15 23:38:19.118539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:44.175 [... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for admin cid:1-3, 23:38:19.118558-118631 ...]
00:11:44.175 [2024-07-15 23:38:19.118645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d9790 is same with the state(5) to be set
00:11:44.175 [2024-07-15 23:38:19.118726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:11:44.175 [2024-07-15 23:38:19.118747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:44.176 [... 62 further WRITE / ABORTED - SQ DELETION pairs, cid:1-62, lba:82048-89856, 23:38:19.118772-120746 ...]
00:11:44.177 [2024-07-15 23:38:19.120761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:11:44.177 [2024-07-15 23:38:19.120775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:44.177 [2024-07-15 23:38:19.120864] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x28ea900 was disconnected and freed. reset controller.
00:11:44.177 [2024-07-15 23:38:19.122083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:11:44.177 task offset: 81920 on job bdev=Nvme0n1 fails
00:11:44.177
00:11:44.177                                                                    Latency(us)
00:11:44.177 Device Information     : runtime(s)     IOPS    MiB/s   Fail/s   TO/s    Average      min      max
00:11:44.177 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:11:44.177 Job: Nvme0n1 ended in about 0.41 seconds with error
00:11:44.177 Verification LBA range: start 0x0 length 0x400
00:11:44.177 Nvme0n1                :       0.41  1571.55    98.22   157.15   0.00   35967.37  2997.67 34175.81
00:11:44.177 ===================================================================================================================
00:11:44.177 Total                  :             1571.55    98.22   157.15   0.00   35967.37  2997.67 34175.81
00:11:44.177 [2024-07-15 23:38:19.123978] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:11:44.177 [2024-07-15 23:38:19.124007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d9790 (9): Bad file descriptor
00:11:44.177 [2024-07-15 23:38:19.256096] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
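The read_io_count values traced before the failure (67, then 579, each tested with -ge 100) come from the waitforio helper, which is essentially a bounded poll of bdevperf's iostat until enough verify reads have completed. A sketch of its shape, reconstructed from the trace (retry count, interval, jq filter and threshold all appear above; the exact loop structure is an assumption):

  # Rough shape of waitforio as traced above; rpc.py stands in for the suite's rpc_cmd wrapper.
  i=10 ret=1
  while (( i != 0 )); do
      reads=$(rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
              | jq -r '.bdevs[0].num_read_ops')
      # declare success once at least 100 reads have gone through Nvme0n1
      [ "$reads" -ge 100 ] && { ret=0; break; }
      sleep 0.25
      (( i-- ))
  done

The qpair abort storm and controller reset above appear to be the intended outcome of this subtest: removing the host from the subsystem mid-I/O forces the disconnect, and the bdev_nvme reset path recovers ("Resetting controller successful").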
00:11:45.110 23:38:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3740203 00:11:45.110 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3740203) - No such process 00:11:45.110 23:38:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:11:45.110 23:38:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:11:45.110 23:38:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:11:45.110 23:38:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:11:45.110 23:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:45.110 23:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:45.110 23:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:45.110 23:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:45.110 { 00:11:45.110 "params": { 00:11:45.110 "name": "Nvme$subsystem", 00:11:45.110 "trtype": "$TEST_TRANSPORT", 00:11:45.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:45.110 "adrfam": "ipv4", 00:11:45.110 "trsvcid": "$NVMF_PORT", 00:11:45.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:45.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:45.110 "hdgst": ${hdgst:-false}, 00:11:45.110 "ddgst": ${ddgst:-false} 00:11:45.110 }, 00:11:45.110 "method": "bdev_nvme_attach_controller" 00:11:45.110 } 00:11:45.110 EOF 00:11:45.110 )") 00:11:45.110 23:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:45.110 23:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:11:45.110 23:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:45.110 23:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:45.110 "params": { 00:11:45.110 "name": "Nvme0", 00:11:45.110 "trtype": "tcp", 00:11:45.110 "traddr": "10.0.0.2", 00:11:45.110 "adrfam": "ipv4", 00:11:45.110 "trsvcid": "4420", 00:11:45.110 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:45.110 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:45.110 "hdgst": false, 00:11:45.110 "ddgst": false 00:11:45.110 }, 00:11:45.110 "method": "bdev_nvme_attach_controller" 00:11:45.110 }' 00:11:45.111 [2024-07-15 23:38:20.169544] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:11:45.111 [2024-07-15 23:38:20.169632] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3740365 ] 00:11:45.111 EAL: No free 2048 kB hugepages reported on node 1 00:11:45.111 [2024-07-15 23:38:20.232850] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.369 [2024-07-15 23:38:20.346796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.627 Running I/O for 1 seconds... 
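The "No such process" from kill -9 above is benign: bdevperf stopped itself via spdk_app_stop when its I/O failed, so host_management.sh line 91 deliberately tolerates the failed kill (the bare "# true" in the trace is the right-hand side of that guard):

  kill -9 "$perfpid" || true   # process may have already exited; don't abort the test run

The second bdevperf invocation traced above then repeats the same --json /dev/fd setup, this time against the re-added host, and runs for 1 second.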
00:11:47.002
00:11:47.002                                                                    Latency(us)
00:11:47.002 Device Information     : runtime(s)     IOPS    MiB/s   Fail/s   TO/s    Average      min      max
00:11:47.002 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:11:47.002 Verification LBA range: start 0x0 length 0x400
00:11:47.002 Nvme0n1                :       1.02  1665.68   104.11     0.00   0.00   37601.15  3373.89 33204.91
00:11:47.002 ===================================================================================================================
00:11:47.002 Total                  :             1665.68   104.11     0.00   0.00   37601.15  3373.89 33204.91
00:11:47.002 23:38:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:11:47.002 23:38:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:11:47.002 23:38:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:11:47.002 23:38:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:11:47.002 23:38:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:11:47.002 23:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
00:11:47.002 23:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync
00:11:47.002 23:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:11:47.002 23:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e
00:11:47.002 23:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20}
00:11:47.002 23:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:11:47.002 rmmod nvme_tcp
00:11:47.002 rmmod nvme_fabrics
00:11:47.002 rmmod nvme_keyring
00:11:47.002 23:38:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:11:47.002 23:38:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e
00:11:47.002 23:38:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0
00:11:47.002 23:38:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 3740039 ']'
00:11:47.002 23:38:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 3740039
00:11:47.002 23:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 3740039 ']'
00:11:47.002 23:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 3740039
00:11:47.002 23:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname
00:11:47.002 23:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:11:47.002 23:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3740039
00:11:47.002 23:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:11:47.002 23:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:11:47.002 23:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3740039'
00:11:47.002 killing process with pid 3740039
00:11:47.002 23:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 3740039
00:11:47.260 23:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 3740039
00:11:47.260 [2024-07-15 23:38:22.319500] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:11:47.260 23:38:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:11:47.260 23:38:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:11:47.260 23:38:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:11:47.260 23:38:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:11:47.260 23:38:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns
00:11:47.260 23:38:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:47.260 23:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:11:47.260 23:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:49.795 23:38:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:11:49.795 23:38:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:11:49.795
00:11:49.795 real    0m8.875s
00:11:49.795 user    0m20.392s
00:11:49.795 sys     0m2.719s
00:11:49.795 23:38:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable
00:11:49.795 23:38:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:11:49.795 ************************************
00:11:49.795 END TEST nvmf_host_management
00:11:49.795 ************************************
00:11:49.795 23:38:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:11:49.795 23:38:24 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:11:49.795 23:38:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:11:49.795 23:38:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:49.795 23:38:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:11:49.795 ************************************
00:11:49.795 START TEST nvmf_lvol
00:11:49.795 ************************************
00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:11:49.795 * Looking for test storage...
00:11:49.795 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.795 23:38:24 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:11:49.795 23:38:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:51.694 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:51.694 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:51.694 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:51.695 Found net devices under 0000:09:00.0: cvl_0_0 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:51.695 Found net devices under 0000:09:00.1: cvl_0_1 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:51.695 
23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:51.695 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:51.695 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:11:51.695 00:11:51.695 --- 10.0.0.2 ping statistics --- 00:11:51.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.695 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:51.695 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:51.695 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:11:51.695 00:11:51.695 --- 10.0.0.1 ping statistics --- 00:11:51.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.695 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3742562 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3742562 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 3742562 ']' 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:51.695 23:38:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:51.695 [2024-07-15 23:38:26.742595] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:11:51.695 [2024-07-15 23:38:26.742679] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:51.695 EAL: No free 2048 kB hugepages reported on node 1 00:11:51.695 [2024-07-15 23:38:26.806229] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:51.954 [2024-07-15 23:38:26.917936] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:51.954 [2024-07-15 23:38:26.918013] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:51.954 [2024-07-15 23:38:26.918042] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:51.954 [2024-07-15 23:38:26.918054] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:51.954 [2024-07-15 23:38:26.918063] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:51.954 [2024-07-15 23:38:26.918114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.954 [2024-07-15 23:38:26.918172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:51.954 [2024-07-15 23:38:26.918175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.954 23:38:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:51.954 23:38:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:11:51.954 23:38:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:51.954 23:38:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:51.954 23:38:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:51.954 23:38:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:51.954 23:38:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:52.212 [2024-07-15 23:38:27.297457] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:52.212 23:38:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:52.778 23:38:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:52.778 23:38:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:53.035 23:38:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:53.035 23:38:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:53.035 23:38:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:53.601 23:38:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=87fd24ab-273d-45a8-a1c9-af4dd349fe3b 00:11:53.601 23:38:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 87fd24ab-273d-45a8-a1c9-af4dd349fe3b lvol 20 00:11:53.601 23:38:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f973be00-74c0-48d4-a811-5a5bd49bacdb 00:11:53.601 23:38:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:53.860 23:38:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f973be00-74c0-48d4-a811-5a5bd49bacdb 00:11:54.118 23:38:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
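Stripped of the workspace paths and xtrace noise, the target build-out traced above is a nine-call RPC sequence. A condensed sketch, assuming rpc.py is on PATH and leaving the store/volume UUIDs (printed by the create calls) as placeholders:

rpc.py nvmf_create_transport -t tcp -o -u 8192      # TCP transport, 8 KiB in-capsule data
rpc.py bdev_malloc_create 64 512                    # -> Malloc0 (64 MiB, 512 B blocks)
rpc.py bdev_malloc_create 64 512                    # -> Malloc1
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
rpc.py bdev_lvol_create_lvstore raid0 lvs           # prints the lvstore UUID
rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20       # 20 MiB volume on that store
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420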
00:11:54.375 [2024-07-15 23:38:29.483144] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:54.633 23:38:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:54.890 23:38:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3742983 00:11:54.890 23:38:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:54.890 23:38:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:54.890 EAL: No free 2048 kB hugepages reported on node 1 00:11:55.824 23:38:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f973be00-74c0-48d4-a811-5a5bd49bacdb MY_SNAPSHOT 00:11:56.082 23:38:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=97490a79-f216-4d85-86a8-ce5d8ac8eaa0 00:11:56.082 23:38:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f973be00-74c0-48d4-a811-5a5bd49bacdb 30 00:11:56.340 23:38:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 97490a79-f216-4d85-86a8-ce5d8ac8eaa0 MY_CLONE 00:11:56.907 23:38:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=55c6a335-2a86-4b34-bb88-1be0c1ccab6c 00:11:56.907 23:38:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 55c6a335-2a86-4b34-bb88-1be0c1ccab6c 00:11:57.474 23:38:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3742983 00:12:05.608 Initializing NVMe Controllers 00:12:05.608 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:05.608 Controller IO queue size 128, less than required. 00:12:05.608 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:05.608 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:12:05.608 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:12:05.608 Initialization complete. Launching workers. 
00:12:05.608 ======================================================== 00:12:05.608 Latency(us) 00:12:05.608 Device Information : IOPS MiB/s Average min max 00:12:05.608 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10695.90 41.78 11968.61 2235.51 88463.34 00:12:05.608 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10392.50 40.60 12325.07 2238.79 80504.15 00:12:05.608 ======================================================== 00:12:05.608 Total : 21088.40 82.38 12144.28 2235.51 88463.34 00:12:05.608 00:12:05.608 23:38:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:05.608 23:38:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f973be00-74c0-48d4-a811-5a5bd49bacdb 00:12:05.608 23:38:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 87fd24ab-273d-45a8-a1c9-af4dd349fe3b 00:12:05.866 23:38:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:12:05.866 23:38:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:12:05.866 23:38:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:12:05.866 23:38:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:05.866 23:38:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:12:05.866 23:38:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:05.866 23:38:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:12:05.866 23:38:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:05.866 23:38:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:06.125 rmmod nvme_tcp 00:12:06.125 rmmod nvme_fabrics 00:12:06.125 rmmod nvme_keyring 00:12:06.125 23:38:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:06.125 23:38:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:12:06.125 23:38:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:12:06.125 23:38:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3742562 ']' 00:12:06.125 23:38:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3742562 00:12:06.125 23:38:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 3742562 ']' 00:12:06.125 23:38:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 3742562 00:12:06.125 23:38:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:12:06.125 23:38:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:06.125 23:38:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3742562 00:12:06.125 23:38:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:06.125 23:38:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:06.125 23:38:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3742562' 00:12:06.125 killing process with pid 3742562 00:12:06.125 23:38:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 3742562 00:12:06.125 23:38:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 3742562 00:12:06.383 23:38:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:06.383 
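The teardown just traced is symmetric. A condensed sketch of it, with the PID and UUIDs this run happened to get treated as placeholders, and with the namespace removal (hidden inside _remove_spdk_ns above) written as a plain ip netns delete for illustration:

rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
rpc.py bdev_lvol_delete <lvol-uuid>                 # f973be00-... in this run
rpc.py bdev_lvol_delete_lvstore -u <lvs-uuid>       # 87fd24ab-... in this run
modprobe -r nvme-tcp nvme-fabrics                   # nvme_keyring unloads as a dependent, per the rmmod lines above
kill <nvmfpid>                                      # 3742562 here; stops the nvmf_tgt reactors
ip -4 addr flush cvl_0_1                            # drop the initiator-side address
ip netns delete cvl_0_0_ns_spdk                     # assumed form of _remove_spdk_ns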
23:38:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:06.383 23:38:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:06.383 23:38:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:06.383 23:38:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:06.383 23:38:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.383 23:38:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:06.383 23:38:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.938 23:38:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:08.938 00:12:08.938 real 0m19.001s 00:12:08.938 user 1m5.052s 00:12:08.938 sys 0m5.556s 00:12:08.938 23:38:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:08.938 23:38:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:08.938 ************************************ 00:12:08.938 END TEST nvmf_lvol 00:12:08.938 ************************************ 00:12:08.938 23:38:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:08.938 23:38:43 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:08.938 23:38:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:08.938 23:38:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:08.938 23:38:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:08.938 ************************************ 00:12:08.938 START TEST nvmf_lvs_grow 00:12:08.938 ************************************ 00:12:08.938 23:38:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:08.938 * Looking for test storage... 
00:12:08.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:08.938 23:38:43 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:08.938 23:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:12:08.938 23:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:08.938 23:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:08.938 23:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:08.938 23:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:08.938 23:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:08.938 23:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:08.938 23:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:08.938 23:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:08.938 23:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:08.938 23:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:08.938 23:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:08.938 23:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:08.938 23:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:08.938 23:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:08.938 23:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:08.938 23:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:08.938 23:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:08.938 23:38:43 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:08.938 23:38:43 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:08.938 23:38:43 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:08.938 23:38:43 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.938 23:38:43 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.938 23:38:43 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.938 23:38:43 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:12:08.938 23:38:43 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.938 23:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:12:08.938 23:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:08.938 23:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:08.939 23:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:08.939 23:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:08.939 23:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:08.939 23:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:08.939 23:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:08.939 23:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:08.939 23:38:43 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:08.939 23:38:43 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:08.939 23:38:43 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:12:08.939 23:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:08.939 23:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:08.939 23:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:08.939 23:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:08.939 23:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:08.939 23:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.939 23:38:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:08.939 23:38:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.939 23:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:08.939 23:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:08.939 23:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:12:08.939 23:38:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:10.837 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:10.837 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:10.837 Found net devices under 0000:09:00.0: cvl_0_0 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:10.837 Found net devices under 0000:09:00.1: cvl_0_1 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:10.837 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:10.837 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:12:10.837 00:12:10.837 --- 10.0.0.2 ping statistics --- 00:12:10.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.837 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:12:10.837 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:10.837 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:10.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:12:10.838 00:12:10.838 --- 10.0.0.1 ping statistics --- 00:12:10.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.838 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:12:10.838 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:10.838 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:12:10.838 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:10.838 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:10.838 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:10.838 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:10.838 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:10.838 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:10.838 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:10.838 23:38:45 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:12:10.838 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:10.838 23:38:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:10.838 23:38:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:10.838 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3746253 00:12:10.838 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:10.838 23:38:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3746253 00:12:10.838 23:38:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 3746253 ']' 00:12:10.838 23:38:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:10.838 23:38:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:10.838 23:38:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:10.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:10.838 23:38:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:10.838 23:38:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:10.838 [2024-07-15 23:38:45.879476] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:12:10.838 [2024-07-15 23:38:45.879546] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:10.838 EAL: No free 2048 kB hugepages reported on node 1 00:12:10.838 [2024-07-15 23:38:45.942522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.096 [2024-07-15 23:38:46.048678] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:11.096 [2024-07-15 23:38:46.048758] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:11.096 [2024-07-15 23:38:46.048772] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:11.096 [2024-07-15 23:38:46.048782] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:11.096 [2024-07-15 23:38:46.048792] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:11.096 [2024-07-15 23:38:46.048823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.096 23:38:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:11.096 23:38:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:12:11.096 23:38:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:11.096 23:38:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:11.096 23:38:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:11.096 23:38:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:11.096 23:38:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:11.353 [2024-07-15 23:38:46.452554] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:11.353 23:38:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:12:11.353 23:38:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:11.353 23:38:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:11.353 23:38:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:11.613 ************************************ 00:12:11.613 START TEST lvs_grow_clean 00:12:11.613 ************************************ 00:12:11.613 23:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:12:11.613 23:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:11.613 23:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:11.613 23:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:11.613 23:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:11.613 23:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:11.613 23:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:11.613 23:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:11.613 23:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:11.613 23:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:11.870 23:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:12:11.870 23:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:12.128 23:38:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=f2df9b3f-4c76-4862-8e8a-538223493dce 00:12:12.128 23:38:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f2df9b3f-4c76-4862-8e8a-538223493dce 00:12:12.128 23:38:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:12.386 23:38:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:12.386 23:38:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:12.386 23:38:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f2df9b3f-4c76-4862-8e8a-538223493dce lvol 150 00:12:12.644 23:38:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=942441be-0ff8-44d5-a6fe-634628d62733 00:12:12.644 23:38:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:12.644 23:38:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:12.903 [2024-07-15 23:38:47.775042] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:12.903 [2024-07-15 23:38:47.775127] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:12.903 true 00:12:12.903 23:38:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f2df9b3f-4c76-4862-8e8a-538223493dce 00:12:12.903 23:38:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:13.161 23:38:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:13.161 23:38:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:13.161 23:38:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 942441be-0ff8-44d5-a6fe-634628d62733 00:12:13.419 23:38:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:13.676 [2024-07-15 23:38:48.770065] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:13.676 23:38:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:13.934 23:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3746688 00:12:13.934 23:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:13.934 23:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:13.934 23:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3746688 /var/tmp/bdevperf.sock 00:12:13.934 23:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 3746688 ']' 00:12:13.934 23:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:13.934 23:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:13.934 23:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:13.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:13.934 23:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:13.934 23:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:14.193 [2024-07-15 23:38:49.077810] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
00:12:14.193 [2024-07-15 23:38:49.077882] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3746688 ] 00:12:14.193 EAL: No free 2048 kB hugepages reported on node 1 00:12:14.193 [2024-07-15 23:38:49.134780] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.193 [2024-07-15 23:38:49.240149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:14.451 23:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:14.451 23:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:12:14.451 23:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:14.708 Nvme0n1 00:12:14.708 23:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:14.966 [ 00:12:14.966 { 00:12:14.966 "name": "Nvme0n1", 00:12:14.966 "aliases": [ 00:12:14.966 "942441be-0ff8-44d5-a6fe-634628d62733" 00:12:14.966 ], 00:12:14.966 "product_name": "NVMe disk", 00:12:14.966 "block_size": 4096, 00:12:14.966 "num_blocks": 38912, 00:12:14.966 "uuid": "942441be-0ff8-44d5-a6fe-634628d62733", 00:12:14.966 "assigned_rate_limits": { 00:12:14.966 "rw_ios_per_sec": 0, 00:12:14.966 "rw_mbytes_per_sec": 0, 00:12:14.966 "r_mbytes_per_sec": 0, 00:12:14.966 "w_mbytes_per_sec": 0 00:12:14.966 }, 00:12:14.966 "claimed": false, 00:12:14.966 "zoned": false, 00:12:14.966 "supported_io_types": { 00:12:14.966 "read": true, 00:12:14.966 "write": true, 00:12:14.966 "unmap": true, 00:12:14.966 "flush": true, 00:12:14.966 "reset": true, 00:12:14.966 "nvme_admin": true, 00:12:14.966 "nvme_io": true, 00:12:14.966 "nvme_io_md": false, 00:12:14.966 "write_zeroes": true, 00:12:14.966 "zcopy": false, 00:12:14.966 "get_zone_info": false, 00:12:14.966 "zone_management": false, 00:12:14.966 "zone_append": false, 00:12:14.966 "compare": true, 00:12:14.966 "compare_and_write": true, 00:12:14.966 "abort": true, 00:12:14.966 "seek_hole": false, 00:12:14.966 "seek_data": false, 00:12:14.966 "copy": true, 00:12:14.966 "nvme_iov_md": false 00:12:14.966 }, 00:12:14.966 "memory_domains": [ 00:12:14.966 { 00:12:14.966 "dma_device_id": "system", 00:12:14.966 "dma_device_type": 1 00:12:14.966 } 00:12:14.966 ], 00:12:14.966 "driver_specific": { 00:12:14.966 "nvme": [ 00:12:14.966 { 00:12:14.966 "trid": { 00:12:14.966 "trtype": "TCP", 00:12:14.966 "adrfam": "IPv4", 00:12:14.966 "traddr": "10.0.0.2", 00:12:14.966 "trsvcid": "4420", 00:12:14.966 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:14.966 }, 00:12:14.966 "ctrlr_data": { 00:12:14.966 "cntlid": 1, 00:12:14.966 "vendor_id": "0x8086", 00:12:14.966 "model_number": "SPDK bdev Controller", 00:12:14.966 "serial_number": "SPDK0", 00:12:14.966 "firmware_revision": "24.09", 00:12:14.966 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:14.966 "oacs": { 00:12:14.966 "security": 0, 00:12:14.966 "format": 0, 00:12:14.966 "firmware": 0, 00:12:14.966 "ns_manage": 0 00:12:14.966 }, 00:12:14.966 "multi_ctrlr": true, 00:12:14.966 "ana_reporting": false 00:12:14.966 }, 
00:12:14.966 "vs": { 00:12:14.966 "nvme_version": "1.3" 00:12:14.966 }, 00:12:14.966 "ns_data": { 00:12:14.966 "id": 1, 00:12:14.966 "can_share": true 00:12:14.966 } 00:12:14.966 } 00:12:14.966 ], 00:12:14.966 "mp_policy": "active_passive" 00:12:14.966 } 00:12:14.966 } 00:12:14.966 ] 00:12:14.966 23:38:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3746825 00:12:14.966 23:38:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:14.966 23:38:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:15.224 Running I/O for 10 seconds... 00:12:16.158 Latency(us) 00:12:16.158 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:16.158 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:16.158 Nvme0n1 : 1.00 15563.00 60.79 0.00 0.00 0.00 0.00 0.00 00:12:16.158 =================================================================================================================== 00:12:16.158 Total : 15563.00 60.79 0.00 0.00 0.00 0.00 0.00 00:12:16.158 00:12:17.092 23:38:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f2df9b3f-4c76-4862-8e8a-538223493dce 00:12:17.092 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:17.092 Nvme0n1 : 2.00 15689.00 61.29 0.00 0.00 0.00 0.00 0.00 00:12:17.092 =================================================================================================================== 00:12:17.092 Total : 15689.00 61.29 0.00 0.00 0.00 0.00 0.00 00:12:17.092 00:12:17.350 true 00:12:17.350 23:38:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f2df9b3f-4c76-4862-8e8a-538223493dce 00:12:17.350 23:38:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:17.609 23:38:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:17.609 23:38:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:17.609 23:38:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3746825 00:12:18.175 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:18.175 Nvme0n1 : 3.00 15753.67 61.54 0.00 0.00 0.00 0.00 0.00 00:12:18.175 =================================================================================================================== 00:12:18.175 Total : 15753.67 61.54 0.00 0.00 0.00 0.00 0.00 00:12:18.175 00:12:19.107 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:19.107 Nvme0n1 : 4.00 15864.50 61.97 0.00 0.00 0.00 0.00 0.00 00:12:19.107 =================================================================================================================== 00:12:19.107 Total : 15864.50 61.97 0.00 0.00 0.00 0.00 0.00 00:12:19.107 00:12:20.041 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:20.041 Nvme0n1 : 5.00 15934.40 62.24 0.00 0.00 0.00 0.00 0.00 00:12:20.041 =================================================================================================================== 00:12:20.041 
Total : 15934.40 62.24 0.00 0.00 0.00 0.00 0.00 00:12:20.041 00:12:21.415 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:21.416 Nvme0n1 : 6.00 15978.67 62.42 0.00 0.00 0.00 0.00 0.00 00:12:21.416 =================================================================================================================== 00:12:21.416 Total : 15978.67 62.42 0.00 0.00 0.00 0.00 0.00 00:12:21.416 00:12:22.380 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:22.380 Nvme0n1 : 7.00 16018.29 62.57 0.00 0.00 0.00 0.00 0.00 00:12:22.380 =================================================================================================================== 00:12:22.380 Total : 16018.29 62.57 0.00 0.00 0.00 0.00 0.00 00:12:22.380 00:12:23.316 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:23.316 Nvme0n1 : 8.00 16056.75 62.72 0.00 0.00 0.00 0.00 0.00 00:12:23.316 =================================================================================================================== 00:12:23.316 Total : 16056.75 62.72 0.00 0.00 0.00 0.00 0.00 00:12:23.316 00:12:24.275 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:24.275 Nvme0n1 : 9.00 16081.78 62.82 0.00 0.00 0.00 0.00 0.00 00:12:24.275 =================================================================================================================== 00:12:24.275 Total : 16081.78 62.82 0.00 0.00 0.00 0.00 0.00 00:12:24.275 00:12:25.210 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:25.210 Nvme0n1 : 10.00 16107.40 62.92 0.00 0.00 0.00 0.00 0.00 00:12:25.210 =================================================================================================================== 00:12:25.210 Total : 16107.40 62.92 0.00 0.00 0.00 0.00 0.00 00:12:25.210 00:12:25.210 00:12:25.210 Latency(us) 00:12:25.210 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:25.210 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:25.210 Nvme0n1 : 10.00 16106.89 62.92 0.00 0.00 7941.76 4587.52 14854.83 00:12:25.210 =================================================================================================================== 00:12:25.210 Total : 16106.89 62.92 0.00 0.00 7941.76 4587.52 14854.83 00:12:25.210 0 00:12:25.210 23:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3746688 00:12:25.210 23:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 3746688 ']' 00:12:25.210 23:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 3746688 00:12:25.210 23:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:12:25.210 23:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:25.210 23:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3746688 00:12:25.210 23:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:25.210 23:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:25.210 23:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3746688' 00:12:25.210 killing process with pid 3746688 00:12:25.210 23:39:00 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 3746688 00:12:25.210 Received shutdown signal, test time was about 10.000000 seconds 00:12:25.210 00:12:25.210 Latency(us) 00:12:25.210 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:25.210 =================================================================================================================== 00:12:25.210 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:25.210 23:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 3746688 00:12:25.468 23:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:25.726 23:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:25.983 23:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:25.983 23:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f2df9b3f-4c76-4862-8e8a-538223493dce 00:12:26.240 23:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:26.240 23:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:12:26.240 23:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:26.498 [2024-07-15 23:39:01.448750] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:26.498 23:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f2df9b3f-4c76-4862-8e8a-538223493dce 00:12:26.498 23:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:12:26.498 23:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f2df9b3f-4c76-4862-8e8a-538223493dce 00:12:26.498 23:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:26.498 23:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:26.498 23:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:26.498 23:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:26.498 23:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:26.498 23:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:26.498 23:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:26.498 23:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:26.498 23:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f2df9b3f-4c76-4862-8e8a-538223493dce 00:12:26.756 request: 00:12:26.756 { 00:12:26.756 "uuid": "f2df9b3f-4c76-4862-8e8a-538223493dce", 00:12:26.756 "method": "bdev_lvol_get_lvstores", 00:12:26.756 "req_id": 1 00:12:26.756 } 00:12:26.756 Got JSON-RPC error response 00:12:26.756 response: 00:12:26.756 { 00:12:26.756 "code": -19, 00:12:26.756 "message": "No such device" 00:12:26.756 } 00:12:26.756 23:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:12:26.756 23:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:26.756 23:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:26.756 23:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:26.756 23:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:27.015 aio_bdev 00:12:27.015 23:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 942441be-0ff8-44d5-a6fe-634628d62733 00:12:27.015 23:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=942441be-0ff8-44d5-a6fe-634628d62733 00:12:27.015 23:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:27.015 23:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:12:27.015 23:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:27.015 23:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:27.015 23:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:27.272 23:39:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 942441be-0ff8-44d5-a6fe-634628d62733 -t 2000 00:12:27.530 [ 00:12:27.530 { 00:12:27.530 "name": "942441be-0ff8-44d5-a6fe-634628d62733", 00:12:27.530 "aliases": [ 00:12:27.530 "lvs/lvol" 00:12:27.530 ], 00:12:27.530 "product_name": "Logical Volume", 00:12:27.530 "block_size": 4096, 00:12:27.530 "num_blocks": 38912, 00:12:27.530 "uuid": "942441be-0ff8-44d5-a6fe-634628d62733", 00:12:27.530 "assigned_rate_limits": { 00:12:27.530 "rw_ios_per_sec": 0, 00:12:27.530 "rw_mbytes_per_sec": 0, 00:12:27.530 "r_mbytes_per_sec": 0, 00:12:27.530 "w_mbytes_per_sec": 0 00:12:27.530 }, 00:12:27.530 "claimed": false, 00:12:27.530 "zoned": false, 00:12:27.530 "supported_io_types": { 00:12:27.530 "read": true, 00:12:27.530 "write": true, 00:12:27.530 "unmap": true, 00:12:27.530 "flush": false, 00:12:27.530 "reset": true, 00:12:27.530 "nvme_admin": false, 00:12:27.530 "nvme_io": false, 00:12:27.530 
"nvme_io_md": false, 00:12:27.530 "write_zeroes": true, 00:12:27.530 "zcopy": false, 00:12:27.530 "get_zone_info": false, 00:12:27.530 "zone_management": false, 00:12:27.530 "zone_append": false, 00:12:27.530 "compare": false, 00:12:27.530 "compare_and_write": false, 00:12:27.530 "abort": false, 00:12:27.530 "seek_hole": true, 00:12:27.530 "seek_data": true, 00:12:27.530 "copy": false, 00:12:27.530 "nvme_iov_md": false 00:12:27.530 }, 00:12:27.530 "driver_specific": { 00:12:27.530 "lvol": { 00:12:27.530 "lvol_store_uuid": "f2df9b3f-4c76-4862-8e8a-538223493dce", 00:12:27.530 "base_bdev": "aio_bdev", 00:12:27.530 "thin_provision": false, 00:12:27.530 "num_allocated_clusters": 38, 00:12:27.530 "snapshot": false, 00:12:27.530 "clone": false, 00:12:27.530 "esnap_clone": false 00:12:27.530 } 00:12:27.530 } 00:12:27.530 } 00:12:27.530 ] 00:12:27.530 23:39:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:12:27.530 23:39:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f2df9b3f-4c76-4862-8e8a-538223493dce 00:12:27.530 23:39:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:27.787 23:39:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:27.787 23:39:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f2df9b3f-4c76-4862-8e8a-538223493dce 00:12:27.787 23:39:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:28.045 23:39:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:28.045 23:39:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 942441be-0ff8-44d5-a6fe-634628d62733 00:12:28.303 23:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f2df9b3f-4c76-4862-8e8a-538223493dce 00:12:28.561 23:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:28.819 23:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:28.819 00:12:28.819 real 0m17.335s 00:12:28.819 user 0m15.916s 00:12:28.819 sys 0m2.339s 00:12:28.819 23:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:28.819 23:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:28.819 ************************************ 00:12:28.819 END TEST lvs_grow_clean 00:12:28.819 ************************************ 00:12:28.819 23:39:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:12:28.819 23:39:03 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:12:28.819 23:39:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:28.819 23:39:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:12:28.819 23:39:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:28.819 ************************************ 00:12:28.819 START TEST lvs_grow_dirty 00:12:28.819 ************************************ 00:12:28.819 23:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:12:28.819 23:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:28.819 23:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:28.819 23:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:28.819 23:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:28.819 23:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:28.819 23:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:28.819 23:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:28.819 23:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:28.819 23:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:29.078 23:39:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:29.078 23:39:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:29.336 23:39:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=4f30d968-5373-4dcc-8284-927861417040 00:12:29.336 23:39:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f30d968-5373-4dcc-8284-927861417040 00:12:29.336 23:39:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:29.595 23:39:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:29.595 23:39:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:29.595 23:39:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4f30d968-5373-4dcc-8284-927861417040 lvol 150 00:12:29.854 23:39:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=7a47be31-2750-4a61-aea1-bd2d987aeaf1 00:12:29.854 23:39:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:29.854 23:39:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:30.112 
[2024-07-15 23:39:05.189129] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:30.112 [2024-07-15 23:39:05.189215] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:30.112 true 00:12:30.112 23:39:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f30d968-5373-4dcc-8284-927861417040 00:12:30.112 23:39:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:30.369 23:39:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:30.369 23:39:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:30.626 23:39:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7a47be31-2750-4a61-aea1-bd2d987aeaf1 00:12:30.884 23:39:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:31.142 [2024-07-15 23:39:06.228330] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:31.142 23:39:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:31.400 23:39:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3748854 00:12:31.400 23:39:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:31.400 23:39:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:31.400 23:39:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3748854 /var/tmp/bdevperf.sock 00:12:31.400 23:39:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 3748854 ']' 00:12:31.400 23:39:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:31.400 23:39:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:31.400 23:39:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:31.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
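bdevperf is started here with -z, so it comes up idle and is driven entirely over its RPC socket rather than from a config file. Condensed from the traced commands above and below, the sequence is roughly the following, with $SPDK again a placeholder for the workspace spdk tree:

$SPDK/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
# once the socket is up, attach the TCP target as bdev Nvme0
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
# then trigger the configured 10s randwrite run; results print once per second
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests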
00:12:31.400 23:39:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:31.400 23:39:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:31.659 [2024-07-15 23:39:06.533092] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:12:31.659 [2024-07-15 23:39:06.533167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3748854 ] 00:12:31.659 EAL: No free 2048 kB hugepages reported on node 1 00:12:31.659 [2024-07-15 23:39:06.591682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.659 [2024-07-15 23:39:06.699641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.918 23:39:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:31.918 23:39:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:12:31.918 23:39:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:32.176 Nvme0n1 00:12:32.176 23:39:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:32.434 [ 00:12:32.434 { 00:12:32.434 "name": "Nvme0n1", 00:12:32.434 "aliases": [ 00:12:32.434 "7a47be31-2750-4a61-aea1-bd2d987aeaf1" 00:12:32.434 ], 00:12:32.434 "product_name": "NVMe disk", 00:12:32.434 "block_size": 4096, 00:12:32.434 "num_blocks": 38912, 00:12:32.434 "uuid": "7a47be31-2750-4a61-aea1-bd2d987aeaf1", 00:12:32.434 "assigned_rate_limits": { 00:12:32.434 "rw_ios_per_sec": 0, 00:12:32.434 "rw_mbytes_per_sec": 0, 00:12:32.434 "r_mbytes_per_sec": 0, 00:12:32.434 "w_mbytes_per_sec": 0 00:12:32.434 }, 00:12:32.434 "claimed": false, 00:12:32.434 "zoned": false, 00:12:32.434 "supported_io_types": { 00:12:32.434 "read": true, 00:12:32.434 "write": true, 00:12:32.434 "unmap": true, 00:12:32.434 "flush": true, 00:12:32.434 "reset": true, 00:12:32.434 "nvme_admin": true, 00:12:32.434 "nvme_io": true, 00:12:32.434 "nvme_io_md": false, 00:12:32.434 "write_zeroes": true, 00:12:32.434 "zcopy": false, 00:12:32.434 "get_zone_info": false, 00:12:32.434 "zone_management": false, 00:12:32.434 "zone_append": false, 00:12:32.434 "compare": true, 00:12:32.434 "compare_and_write": true, 00:12:32.434 "abort": true, 00:12:32.434 "seek_hole": false, 00:12:32.434 "seek_data": false, 00:12:32.434 "copy": true, 00:12:32.434 "nvme_iov_md": false 00:12:32.434 }, 00:12:32.434 "memory_domains": [ 00:12:32.434 { 00:12:32.434 "dma_device_id": "system", 00:12:32.434 "dma_device_type": 1 00:12:32.434 } 00:12:32.434 ], 00:12:32.434 "driver_specific": { 00:12:32.434 "nvme": [ 00:12:32.434 { 00:12:32.434 "trid": { 00:12:32.434 "trtype": "TCP", 00:12:32.434 "adrfam": "IPv4", 00:12:32.434 "traddr": "10.0.0.2", 00:12:32.434 "trsvcid": "4420", 00:12:32.434 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:32.434 }, 00:12:32.434 "ctrlr_data": { 00:12:32.434 "cntlid": 1, 00:12:32.434 "vendor_id": "0x8086", 00:12:32.434 "model_number": "SPDK bdev Controller", 00:12:32.434 "serial_number": "SPDK0", 
00:12:32.434 "firmware_revision": "24.09", 00:12:32.434 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:32.434 "oacs": { 00:12:32.434 "security": 0, 00:12:32.434 "format": 0, 00:12:32.434 "firmware": 0, 00:12:32.434 "ns_manage": 0 00:12:32.434 }, 00:12:32.434 "multi_ctrlr": true, 00:12:32.434 "ana_reporting": false 00:12:32.434 }, 00:12:32.434 "vs": { 00:12:32.434 "nvme_version": "1.3" 00:12:32.434 }, 00:12:32.434 "ns_data": { 00:12:32.434 "id": 1, 00:12:32.434 "can_share": true 00:12:32.434 } 00:12:32.434 } 00:12:32.434 ], 00:12:32.434 "mp_policy": "active_passive" 00:12:32.434 } 00:12:32.434 } 00:12:32.434 ] 00:12:32.434 23:39:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3749145 00:12:32.434 23:39:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:32.434 23:39:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:32.691 Running I/O for 10 seconds... 00:12:33.623 Latency(us) 00:12:33.623 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:33.623 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:33.623 Nvme0n1 : 1.00 15114.00 59.04 0.00 0.00 0.00 0.00 0.00 00:12:33.623 =================================================================================================================== 00:12:33.623 Total : 15114.00 59.04 0.00 0.00 0.00 0.00 0.00 00:12:33.623 00:12:34.600 23:39:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4f30d968-5373-4dcc-8284-927861417040 00:12:34.600 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:34.600 Nvme0n1 : 2.00 15274.50 59.67 0.00 0.00 0.00 0.00 0.00 00:12:34.600 =================================================================================================================== 00:12:34.600 Total : 15274.50 59.67 0.00 0.00 0.00 0.00 0.00 00:12:34.600 00:12:34.600 true 00:12:34.600 23:39:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f30d968-5373-4dcc-8284-927861417040 00:12:34.600 23:39:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:34.857 23:39:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:34.857 23:39:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:34.857 23:39:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3749145 00:12:35.789 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:35.789 Nvme0n1 : 3.00 15347.67 59.95 0.00 0.00 0.00 0.00 0.00 00:12:35.789 =================================================================================================================== 00:12:35.789 Total : 15347.67 59.95 0.00 0.00 0.00 0.00 0.00 00:12:35.789 00:12:36.722 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:36.722 Nvme0n1 : 4.00 15479.50 60.47 0.00 0.00 0.00 0.00 0.00 00:12:36.722 =================================================================================================================== 00:12:36.722 Total : 15479.50 60.47 0.00 
0.00 0.00 0.00 0.00 00:12:36.722 00:12:37.654 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:37.654 Nvme0n1 : 5.00 15559.20 60.78 0.00 0.00 0.00 0.00 0.00 00:12:37.654 =================================================================================================================== 00:12:37.654 Total : 15559.20 60.78 0.00 0.00 0.00 0.00 0.00 00:12:37.654 00:12:38.637 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:38.637 Nvme0n1 : 6.00 15614.83 61.00 0.00 0.00 0.00 0.00 0.00 00:12:38.637 =================================================================================================================== 00:12:38.637 Total : 15614.83 61.00 0.00 0.00 0.00 0.00 0.00 00:12:38.637 00:12:39.569 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:39.569 Nvme0n1 : 7.00 15670.14 61.21 0.00 0.00 0.00 0.00 0.00 00:12:39.569 =================================================================================================================== 00:12:39.569 Total : 15670.14 61.21 0.00 0.00 0.00 0.00 0.00 00:12:39.569 00:12:40.502 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:40.502 Nvme0n1 : 8.00 15727.50 61.44 0.00 0.00 0.00 0.00 0.00 00:12:40.502 =================================================================================================================== 00:12:40.502 Total : 15727.50 61.44 0.00 0.00 0.00 0.00 0.00 00:12:40.502 00:12:41.874 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:41.874 Nvme0n1 : 9.00 15758.33 61.56 0.00 0.00 0.00 0.00 0.00 00:12:41.874 =================================================================================================================== 00:12:41.874 Total : 15758.33 61.56 0.00 0.00 0.00 0.00 0.00 00:12:41.874 00:12:42.809 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:42.809 Nvme0n1 : 10.00 15784.40 61.66 0.00 0.00 0.00 0.00 0.00 00:12:42.809 =================================================================================================================== 00:12:42.809 Total : 15784.40 61.66 0.00 0.00 0.00 0.00 0.00 00:12:42.809 00:12:42.809 00:12:42.809 Latency(us) 00:12:42.809 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:42.809 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:42.809 Nvme0n1 : 10.00 15789.10 61.68 0.00 0.00 8102.16 3070.48 16019.91 00:12:42.809 =================================================================================================================== 00:12:42.809 Total : 15789.10 61.68 0.00 0.00 8102.16 3070.48 16019.91 00:12:42.809 0 00:12:42.809 23:39:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3748854 00:12:42.809 23:39:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 3748854 ']' 00:12:42.809 23:39:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 3748854 00:12:42.809 23:39:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:12:42.809 23:39:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:42.809 23:39:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3748854 00:12:42.809 23:39:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:42.809 23:39:17 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:42.809 23:39:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3748854' 00:12:42.809 killing process with pid 3748854 00:12:42.809 23:39:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 3748854 00:12:42.809 Received shutdown signal, test time was about 10.000000 seconds 00:12:42.809 00:12:42.809 Latency(us) 00:12:42.809 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:42.809 =================================================================================================================== 00:12:42.809 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:42.809 23:39:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 3748854 00:12:42.809 23:39:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:43.067 23:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:43.632 23:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f30d968-5373-4dcc-8284-927861417040 00:12:43.632 23:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:43.632 23:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:43.632 23:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:12:43.632 23:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3746253 00:12:43.632 23:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3746253 00:12:43.890 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3746253 Killed "${NVMF_APP[@]}" "$@" 00:12:43.890 23:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:12:43.890 23:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:12:43.890 23:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:43.890 23:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:43.890 23:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:43.890 23:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3750830 00:12:43.891 23:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:43.891 23:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3750830 00:12:43.891 23:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 3750830 ']' 00:12:43.891 23:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.891 23:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:12:43.891 23:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:43.891 23:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:43.891 23:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:43.891 [2024-07-15 23:39:18.826094] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:12:43.891 [2024-07-15 23:39:18.826187] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:43.891 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.891 [2024-07-15 23:39:18.892383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.891 [2024-07-15 23:39:18.993381] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:43.891 [2024-07-15 23:39:18.993441] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:43.891 [2024-07-15 23:39:18.993464] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:43.891 [2024-07-15 23:39:18.993474] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:43.891 [2024-07-15 23:39:18.993483] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
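The NOTICE lines above come from restarting nvmf_tgt with -e 0xFFFF, which enables every tracepoint group and backs them with the shared-memory file /dev/shm/nvmf_trace.0. Following the hint the app itself prints, a runtime snapshot could be taken as below (a sketch, assuming the spdk_trace tool from the same build is on PATH):

# snapshot tracepoint events from the running target (app name nvmf, shm id 0)
spdk_trace -s nvmf -i 0
# or keep the raw shared-memory file for offline analysis
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0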
00:12:43.891 [2024-07-15 23:39:18.993508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.148 23:39:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:44.148 23:39:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:12:44.148 23:39:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:44.148 23:39:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:44.148 23:39:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:44.148 23:39:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:44.148 23:39:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:44.407 [2024-07-15 23:39:19.348298] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:44.407 [2024-07-15 23:39:19.348429] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:44.407 [2024-07-15 23:39:19.348477] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:44.407 23:39:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:12:44.407 23:39:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 7a47be31-2750-4a61-aea1-bd2d987aeaf1 00:12:44.407 23:39:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=7a47be31-2750-4a61-aea1-bd2d987aeaf1 00:12:44.407 23:39:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:44.407 23:39:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:12:44.407 23:39:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:44.407 23:39:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:44.407 23:39:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:44.665 23:39:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7a47be31-2750-4a61-aea1-bd2d987aeaf1 -t 2000 00:12:44.924 [ 00:12:44.924 { 00:12:44.924 "name": "7a47be31-2750-4a61-aea1-bd2d987aeaf1", 00:12:44.924 "aliases": [ 00:12:44.924 "lvs/lvol" 00:12:44.924 ], 00:12:44.924 "product_name": "Logical Volume", 00:12:44.924 "block_size": 4096, 00:12:44.924 "num_blocks": 38912, 00:12:44.924 "uuid": "7a47be31-2750-4a61-aea1-bd2d987aeaf1", 00:12:44.924 "assigned_rate_limits": { 00:12:44.924 "rw_ios_per_sec": 0, 00:12:44.924 "rw_mbytes_per_sec": 0, 00:12:44.924 "r_mbytes_per_sec": 0, 00:12:44.924 "w_mbytes_per_sec": 0 00:12:44.924 }, 00:12:44.924 "claimed": false, 00:12:44.924 "zoned": false, 00:12:44.924 "supported_io_types": { 00:12:44.924 "read": true, 00:12:44.924 "write": true, 00:12:44.924 "unmap": true, 00:12:44.924 "flush": false, 00:12:44.924 "reset": true, 00:12:44.924 "nvme_admin": false, 00:12:44.924 "nvme_io": false, 00:12:44.924 "nvme_io_md": 
false, 00:12:44.924 "write_zeroes": true, 00:12:44.924 "zcopy": false, 00:12:44.924 "get_zone_info": false, 00:12:44.924 "zone_management": false, 00:12:44.924 "zone_append": false, 00:12:44.924 "compare": false, 00:12:44.924 "compare_and_write": false, 00:12:44.924 "abort": false, 00:12:44.924 "seek_hole": true, 00:12:44.924 "seek_data": true, 00:12:44.924 "copy": false, 00:12:44.924 "nvme_iov_md": false 00:12:44.924 }, 00:12:44.924 "driver_specific": { 00:12:44.924 "lvol": { 00:12:44.924 "lvol_store_uuid": "4f30d968-5373-4dcc-8284-927861417040", 00:12:44.924 "base_bdev": "aio_bdev", 00:12:44.924 "thin_provision": false, 00:12:44.924 "num_allocated_clusters": 38, 00:12:44.924 "snapshot": false, 00:12:44.924 "clone": false, 00:12:44.924 "esnap_clone": false 00:12:44.924 } 00:12:44.924 } 00:12:44.924 } 00:12:44.924 ] 00:12:44.924 23:39:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:12:44.924 23:39:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f30d968-5373-4dcc-8284-927861417040 00:12:44.924 23:39:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:12:45.182 23:39:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:12:45.182 23:39:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f30d968-5373-4dcc-8284-927861417040 00:12:45.182 23:39:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:12:45.440 23:39:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:12:45.440 23:39:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:45.698 [2024-07-15 23:39:20.657599] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:45.698 23:39:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f30d968-5373-4dcc-8284-927861417040 00:12:45.698 23:39:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:12:45.698 23:39:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f30d968-5373-4dcc-8284-927861417040 00:12:45.698 23:39:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:45.698 23:39:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:45.698 23:39:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:45.698 23:39:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:45.698 23:39:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:12:45.698 23:39:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:45.698 23:39:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:45.698 23:39:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:45.698 23:39:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f30d968-5373-4dcc-8284-927861417040 00:12:45.955 request: 00:12:45.955 { 00:12:45.955 "uuid": "4f30d968-5373-4dcc-8284-927861417040", 00:12:45.955 "method": "bdev_lvol_get_lvstores", 00:12:45.955 "req_id": 1 00:12:45.955 } 00:12:45.955 Got JSON-RPC error response 00:12:45.955 response: 00:12:45.955 { 00:12:45.955 "code": -19, 00:12:45.955 "message": "No such device" 00:12:45.955 } 00:12:45.955 23:39:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:12:45.955 23:39:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:45.955 23:39:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:45.955 23:39:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:45.955 23:39:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:46.213 aio_bdev 00:12:46.213 23:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7a47be31-2750-4a61-aea1-bd2d987aeaf1 00:12:46.213 23:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=7a47be31-2750-4a61-aea1-bd2d987aeaf1 00:12:46.213 23:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:46.213 23:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:12:46.213 23:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:46.213 23:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:46.213 23:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:46.472 23:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7a47be31-2750-4a61-aea1-bd2d987aeaf1 -t 2000 00:12:46.730 [ 00:12:46.730 { 00:12:46.730 "name": "7a47be31-2750-4a61-aea1-bd2d987aeaf1", 00:12:46.730 "aliases": [ 00:12:46.730 "lvs/lvol" 00:12:46.730 ], 00:12:46.730 "product_name": "Logical Volume", 00:12:46.730 "block_size": 4096, 00:12:46.730 "num_blocks": 38912, 00:12:46.730 "uuid": "7a47be31-2750-4a61-aea1-bd2d987aeaf1", 00:12:46.730 "assigned_rate_limits": { 00:12:46.730 "rw_ios_per_sec": 0, 00:12:46.730 "rw_mbytes_per_sec": 0, 00:12:46.730 "r_mbytes_per_sec": 0, 00:12:46.730 "w_mbytes_per_sec": 0 00:12:46.730 }, 00:12:46.730 "claimed": false, 00:12:46.730 "zoned": false, 00:12:46.730 "supported_io_types": { 
00:12:46.730 "read": true, 00:12:46.730 "write": true, 00:12:46.730 "unmap": true, 00:12:46.730 "flush": false, 00:12:46.730 "reset": true, 00:12:46.730 "nvme_admin": false, 00:12:46.730 "nvme_io": false, 00:12:46.730 "nvme_io_md": false, 00:12:46.730 "write_zeroes": true, 00:12:46.730 "zcopy": false, 00:12:46.730 "get_zone_info": false, 00:12:46.730 "zone_management": false, 00:12:46.730 "zone_append": false, 00:12:46.730 "compare": false, 00:12:46.730 "compare_and_write": false, 00:12:46.730 "abort": false, 00:12:46.730 "seek_hole": true, 00:12:46.730 "seek_data": true, 00:12:46.730 "copy": false, 00:12:46.730 "nvme_iov_md": false 00:12:46.730 }, 00:12:46.730 "driver_specific": { 00:12:46.730 "lvol": { 00:12:46.730 "lvol_store_uuid": "4f30d968-5373-4dcc-8284-927861417040", 00:12:46.730 "base_bdev": "aio_bdev", 00:12:46.730 "thin_provision": false, 00:12:46.730 "num_allocated_clusters": 38, 00:12:46.730 "snapshot": false, 00:12:46.730 "clone": false, 00:12:46.730 "esnap_clone": false 00:12:46.730 } 00:12:46.730 } 00:12:46.730 } 00:12:46.730 ] 00:12:46.730 23:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:12:46.730 23:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f30d968-5373-4dcc-8284-927861417040 00:12:46.730 23:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:46.988 23:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:46.989 23:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f30d968-5373-4dcc-8284-927861417040 00:12:46.989 23:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:47.246 23:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:47.246 23:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7a47be31-2750-4a61-aea1-bd2d987aeaf1 00:12:47.504 23:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4f30d968-5373-4dcc-8284-927861417040 00:12:47.762 23:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:48.021 23:39:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:48.021 00:12:48.021 real 0m19.178s 00:12:48.021 user 0m48.316s 00:12:48.021 sys 0m4.708s 00:12:48.021 23:39:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:48.021 23:39:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:48.021 ************************************ 00:12:48.021 END TEST lvs_grow_dirty 00:12:48.021 ************************************ 00:12:48.021 23:39:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:12:48.021 23:39:23 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:12:48.021 23:39:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:12:48.021 23:39:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:12:48.021 23:39:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:12:48.021 23:39:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:48.021 23:39:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:12:48.021 23:39:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:12:48.021 23:39:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:12:48.021 23:39:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:48.021 nvmf_trace.0 00:12:48.021 23:39:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:12:48.021 23:39:23 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:48.021 23:39:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:48.021 23:39:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:12:48.021 23:39:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:48.021 23:39:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:12:48.021 23:39:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:48.021 23:39:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:48.021 rmmod nvme_tcp 00:12:48.021 rmmod nvme_fabrics 00:12:48.290 rmmod nvme_keyring 00:12:48.290 23:39:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:48.290 23:39:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:12:48.290 23:39:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:12:48.290 23:39:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3750830 ']' 00:12:48.290 23:39:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3750830 00:12:48.290 23:39:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 3750830 ']' 00:12:48.290 23:39:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 3750830 00:12:48.290 23:39:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:12:48.290 23:39:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:48.290 23:39:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3750830 00:12:48.290 23:39:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:48.290 23:39:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:48.290 23:39:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3750830' 00:12:48.290 killing process with pid 3750830 00:12:48.290 23:39:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 3750830 00:12:48.290 23:39:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 3750830 00:12:48.553 23:39:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:48.553 23:39:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:48.553 23:39:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:48.553 
23:39:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:48.553 23:39:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:48.553 23:39:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.553 23:39:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:48.553 23:39:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.460 23:39:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:50.460 00:12:50.460 real 0m42.020s 00:12:50.460 user 1m10.089s 00:12:50.460 sys 0m8.994s 00:12:50.460 23:39:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:50.460 23:39:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:50.460 ************************************ 00:12:50.460 END TEST nvmf_lvs_grow 00:12:50.460 ************************************ 00:12:50.460 23:39:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:50.460 23:39:25 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:50.460 23:39:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:50.460 23:39:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:50.460 23:39:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:50.460 ************************************ 00:12:50.460 START TEST nvmf_bdev_io_wait 00:12:50.460 ************************************ 00:12:50.460 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:50.718 * Looking for test storage... 
00:12:50.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:50.718 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:50.718 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:12:50.718 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:50.718 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:50.718 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:50.718 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:50.718 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:50.718 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:50.718 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:50.718 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:50.718 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:50.718 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:50.718 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:50.718 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:50.718 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:50.718 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:50.718 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:50.718 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:50.718 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:50.718 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:50.718 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:50.718 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:50.718 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.718 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.718 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.718 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:12:50.718 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.718 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:12:50.718 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:50.718 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:50.718 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:50.718 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:50.718 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:50.718 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:50.718 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:50.718 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:50.719 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:50.719 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:50.719 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:50.719 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:50.719 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:50.719 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:50.719 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:50.719 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:50.719 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.719 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:50.719 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.719 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:50.719 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:50.719 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:12:50.719 23:39:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:53.244 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:53.244 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:12:53.244 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:53.244 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:53.244 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:53.244 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:53.244 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:53.244 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:12:53.244 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:53.244 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:12:53.244 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:12:53.244 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:12:53.244 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:12:53.244 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:12:53.244 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:12:53.244 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:53.244 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:53.244 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:53.244 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:53.244 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:53.244 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:53.244 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:53.244 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:53.244 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:53.244 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:53.245 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:53.245 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:53.245 Found net devices under 0000:09:00.0: cvl_0_0 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:53.245 Found net devices under 0000:09:00.1: cvl_0_1 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:53.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:53.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:12:53.245 00:12:53.245 --- 10.0.0.2 ping statistics --- 00:12:53.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:53.245 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:53.245 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:53.245 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:12:53.245 00:12:53.245 --- 10.0.0.1 ping statistics --- 00:12:53.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:53.245 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3753356 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 3753356 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 3753356 ']' 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:53.245 23:39:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:53.245 [2024-07-15 23:39:28.004813] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
00:12:53.245 [2024-07-15 23:39:28.004898] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:53.245 EAL: No free 2048 kB hugepages reported on node 1 00:12:53.245 [2024-07-15 23:39:28.069443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:53.245 [2024-07-15 23:39:28.183247] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:53.245 [2024-07-15 23:39:28.183307] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:53.245 [2024-07-15 23:39:28.183320] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:53.245 [2024-07-15 23:39:28.183331] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:53.245 [2024-07-15 23:39:28.183341] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:53.245 [2024-07-15 23:39:28.183421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:53.245 [2024-07-15 23:39:28.183496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:53.245 [2024-07-15 23:39:28.183564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:53.245 [2024-07-15 23:39:28.183567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.245 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:53.245 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:12:53.245 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:53.245 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:53.245 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:53.245 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:53.245 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:53.245 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.245 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:53.245 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.245 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:53.245 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.245 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:53.245 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.245 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:53.245 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.245 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:53.245 [2024-07-15 23:39:28.323611] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:53.245 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
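Note: the rpc_cmd calls traced around this point assemble the NVMe-oF target entirely over JSON-RPC: a TCP transport, a 64 MiB Malloc bdev with 512-byte blocks, and subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420. A minimal standalone sketch of the same sequence, assuming a running nvmf_tgt and the stock scripts/rpc.py client shipped with SPDK (socket path is the rpc.py default and hypothetical here):

  RPC='scripts/rpc.py -s /var/tmp/spdk.sock'
  $RPC nvmf_create_transport -t tcp -o -u 8192          # TCP transport; -u sets in-capsule data size, flags as traced above
  $RPC bdev_malloc_create 64 512 -b Malloc0             # 64 MiB RAM-backed bdev, 512 B block size
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose the bdev as a namespace
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420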
00:12:53.245 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:53.245 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.245 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:53.245 Malloc0 00:12:53.245 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.245 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:53.245 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.245 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:53.503 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.503 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:53.503 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.503 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:53.503 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.503 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:53.503 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.503 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:53.503 [2024-07-15 23:39:28.382471] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.503 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.503 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3753477 00:12:53.503 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:53.503 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:53.503 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:53.503 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3753481 00:12:53.503 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:53.503 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:53.503 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:53.503 { 00:12:53.503 "params": { 00:12:53.503 "name": "Nvme$subsystem", 00:12:53.503 "trtype": "$TEST_TRANSPORT", 00:12:53.503 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:53.503 "adrfam": "ipv4", 00:12:53.503 "trsvcid": "$NVMF_PORT", 00:12:53.503 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:53.503 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:53.503 "hdgst": ${hdgst:-false}, 00:12:53.503 "ddgst": ${ddgst:-false} 00:12:53.503 }, 00:12:53.503 "method": "bdev_nvme_attach_controller" 00:12:53.503 } 00:12:53.503 EOF 00:12:53.503 )") 00:12:53.503 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:53.503 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:53.503 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:53.503 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3753484 00:12:53.503 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:53.503 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:53.503 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:53.503 { 00:12:53.503 "params": { 00:12:53.503 "name": "Nvme$subsystem", 00:12:53.503 "trtype": "$TEST_TRANSPORT", 00:12:53.503 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:53.503 "adrfam": "ipv4", 00:12:53.503 "trsvcid": "$NVMF_PORT", 00:12:53.503 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:53.503 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:53.503 "hdgst": ${hdgst:-false}, 00:12:53.503 "ddgst": ${ddgst:-false} 00:12:53.503 }, 00:12:53.503 "method": "bdev_nvme_attach_controller" 00:12:53.503 } 00:12:53.503 EOF 00:12:53.503 )") 00:12:53.503 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:53.503 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:53.503 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:53.503 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:53.503 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3753488 00:12:53.503 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:53.503 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:12:53.503 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:53.504 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:53.504 { 00:12:53.504 "params": { 00:12:53.504 "name": "Nvme$subsystem", 00:12:53.504 "trtype": "$TEST_TRANSPORT", 00:12:53.504 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:53.504 "adrfam": "ipv4", 00:12:53.504 "trsvcid": "$NVMF_PORT", 00:12:53.504 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:53.504 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:53.504 "hdgst": ${hdgst:-false}, 00:12:53.504 "ddgst": ${ddgst:-false} 00:12:53.504 }, 00:12:53.504 "method": "bdev_nvme_attach_controller" 00:12:53.504 } 00:12:53.504 EOF 00:12:53.504 )") 00:12:53.504 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:53.504 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:53.504 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:53.504 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:53.504 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:53.504 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:53.504 23:39:28 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:53.504 { 00:12:53.504 "params": { 00:12:53.504 "name": "Nvme$subsystem", 00:12:53.504 "trtype": "$TEST_TRANSPORT", 00:12:53.504 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:53.504 "adrfam": "ipv4", 00:12:53.504 "trsvcid": "$NVMF_PORT", 00:12:53.504 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:53.504 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:53.504 "hdgst": ${hdgst:-false}, 00:12:53.504 "ddgst": ${ddgst:-false} 00:12:53.504 }, 00:12:53.504 "method": "bdev_nvme_attach_controller" 00:12:53.504 } 00:12:53.504 EOF 00:12:53.504 )") 00:12:53.504 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:53.504 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:53.504 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3753477 00:12:53.504 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:53.504 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:53.504 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:53.504 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:53.504 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:53.504 "params": { 00:12:53.504 "name": "Nvme1", 00:12:53.504 "trtype": "tcp", 00:12:53.504 "traddr": "10.0.0.2", 00:12:53.504 "adrfam": "ipv4", 00:12:53.504 "trsvcid": "4420", 00:12:53.504 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:53.504 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:53.504 "hdgst": false, 00:12:53.504 "ddgst": false 00:12:53.504 }, 00:12:53.504 "method": "bdev_nvme_attach_controller" 00:12:53.504 }' 00:12:53.504 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:53.504 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:53.504 "params": { 00:12:53.504 "name": "Nvme1", 00:12:53.504 "trtype": "tcp", 00:12:53.504 "traddr": "10.0.0.2", 00:12:53.504 "adrfam": "ipv4", 00:12:53.504 "trsvcid": "4420", 00:12:53.504 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:53.504 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:53.504 "hdgst": false, 00:12:53.504 "ddgst": false 00:12:53.504 }, 00:12:53.504 "method": "bdev_nvme_attach_controller" 00:12:53.504 }' 00:12:53.504 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
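Note: each bdevperf instance receives its controller configuration as JSON on /dev/fd/63, i.e. through bash process substitution: gen_nvmf_target_json assembles a bdev_nvme_attach_controller entry for Nvme1 (the rendered configs are printed below) and jq . validates the document. A hedged sketch of the invocation shape, with flags taken from the write job traced above:

  # hand the generated JSON to bdevperf via process substitution,
  # which is what the traced "--json /dev/fd/63" argument expands from
  ./build/examples/bdevperf --json <(gen_nvmf_target_json) -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256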
00:12:53.504 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:53.504 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:53.504 "params": { 00:12:53.504 "name": "Nvme1", 00:12:53.504 "trtype": "tcp", 00:12:53.504 "traddr": "10.0.0.2", 00:12:53.504 "adrfam": "ipv4", 00:12:53.504 "trsvcid": "4420", 00:12:53.504 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:53.504 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:53.504 "hdgst": false, 00:12:53.504 "ddgst": false 00:12:53.504 }, 00:12:53.504 "method": "bdev_nvme_attach_controller" 00:12:53.504 }' 00:12:53.504 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:53.504 23:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:53.504 "params": { 00:12:53.504 "name": "Nvme1", 00:12:53.504 "trtype": "tcp", 00:12:53.504 "traddr": "10.0.0.2", 00:12:53.504 "adrfam": "ipv4", 00:12:53.504 "trsvcid": "4420", 00:12:53.504 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:53.504 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:53.504 "hdgst": false, 00:12:53.504 "ddgst": false 00:12:53.504 }, 00:12:53.504 "method": "bdev_nvme_attach_controller" 00:12:53.504 }' 00:12:53.504 [2024-07-15 23:39:28.431204] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:12:53.504 [2024-07-15 23:39:28.431205] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:12:53.504 [2024-07-15 23:39:28.431303] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:12:53.504 [2024-07-15 23:39:28.431303] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:12:53.504 [2024-07-15 23:39:28.431451] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:12:53.504 [2024-07-15 23:39:28.431526] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:12:53.504 [2024-07-15 23:39:28.440155] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization...
00:12:53.504 [2024-07-15 23:39:28.440235] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:12:53.504 EAL: No free 2048 kB hugepages reported on node 1 00:12:53.504 EAL: No free 2048 kB hugepages reported on node 1 00:12:53.504 [2024-07-15 23:39:28.606119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.762 EAL: No free 2048 kB hugepages reported on node 1 00:12:53.762 [2024-07-15 23:39:28.706595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:12:53.762 [2024-07-15 23:39:28.709053] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.762 EAL: No free 2048 kB hugepages reported on node 1 00:12:53.762 [2024-07-15 23:39:28.784255] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.762 [2024-07-15 23:39:28.810178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:53.762 [2024-07-15 23:39:28.854927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.762 [2024-07-15 23:39:28.879306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:12:54.019 [2024-07-15 23:39:28.948002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:12:54.019 Running I/O for 1 seconds... 00:12:54.019 Running I/O for 1 seconds... 00:12:54.019 Running I/O for 1 seconds... 00:12:54.276 Running I/O for 1 seconds... 00:12:55.213 00:12:55.213 Latency(us) 00:12:55.213 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:55.213 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:55.213 Nvme1n1 : 1.00 196493.25 767.55 0.00 0.00 648.86 259.41 1049.79 00:12:55.213 =================================================================================================================== 00:12:55.213 Total : 196493.25 767.55 0.00 0.00 648.86 259.41 1049.79 00:12:55.213 00:12:55.213 Latency(us) 00:12:55.213 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:55.213 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:55.213 Nvme1n1 : 1.06 5179.55 20.23 0.00 0.00 23546.19 9320.68 64468.01 00:12:55.213 =================================================================================================================== 00:12:55.213 Total : 5179.55 20.23 0.00 0.00 23546.19 9320.68 64468.01 00:12:55.213 00:12:55.213 Latency(us) 00:12:55.213 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:55.213 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:55.213 Nvme1n1 : 1.01 5095.09 19.90 0.00 0.00 25017.40 7718.68 43302.31 00:12:55.213 =================================================================================================================== 00:12:55.213 Total : 5095.09 19.90 0.00 0.00 25017.40 7718.68 43302.31 00:12:55.213 00:12:55.213 Latency(us) 00:12:55.213 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:55.213 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:55.213 Nvme1n1 : 1.01 10618.92 41.48 0.00 0.00 12006.80 6650.69 23592.96 00:12:55.213 =================================================================================================================== 00:12:55.213 Total : 10618.92 41.48 0.00 0.00 12006.80 6650.69 23592.96 00:12:55.472 23:39:30 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@38 -- # wait 3753481 00:12:55.472 23:39:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3753484 00:12:55.472 23:39:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3753488 00:12:55.472 23:39:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:55.472 23:39:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.472 23:39:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:55.472 23:39:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.472 23:39:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:55.472 23:39:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:55.472 23:39:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:55.472 23:39:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:12:55.472 23:39:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:55.472 23:39:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:12:55.472 23:39:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:55.472 23:39:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:55.472 rmmod nvme_tcp 00:12:55.472 rmmod nvme_fabrics 00:12:55.472 rmmod nvme_keyring 00:12:55.472 23:39:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:55.472 23:39:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:12:55.472 23:39:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:12:55.472 23:39:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3753356 ']' 00:12:55.472 23:39:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3753356 00:12:55.472 23:39:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 3753356 ']' 00:12:55.472 23:39:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 3753356 00:12:55.472 23:39:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:12:55.472 23:39:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:55.472 23:39:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3753356 00:12:55.472 23:39:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:55.472 23:39:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:55.472 23:39:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3753356' 00:12:55.472 killing process with pid 3753356 00:12:55.472 23:39:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 3753356 00:12:55.472 23:39:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 3753356 00:12:55.749 23:39:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:55.749 23:39:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:55.749 23:39:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:55.749 23:39:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:55.749 23:39:30 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:12:55.749 23:39:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.749 23:39:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:55.749 23:39:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.319 23:39:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:58.319 00:12:58.319 real 0m7.311s 00:12:58.319 user 0m16.608s 00:12:58.319 sys 0m3.579s 00:12:58.319 23:39:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:58.319 23:39:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:58.319 ************************************ 00:12:58.319 END TEST nvmf_bdev_io_wait 00:12:58.319 ************************************ 00:12:58.319 23:39:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:58.319 23:39:32 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:58.319 23:39:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:58.319 23:39:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:58.319 23:39:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:58.319 ************************************ 00:12:58.319 START TEST nvmf_queue_depth 00:12:58.319 ************************************ 00:12:58.319 23:39:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:58.319 * Looking for test storage... 
00:12:58.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:58.319 23:39:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:58.319 23:39:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:12:58.319 23:39:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:58.319 23:39:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:58.319 23:39:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:58.319 23:39:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:58.319 23:39:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:58.319 23:39:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:58.319 23:39:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:58.319 23:39:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:58.319 23:39:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:58.319 23:39:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:58.319 23:39:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:58.319 23:39:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:58.320 23:39:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:58.320 23:39:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:58.320 23:39:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:58.320 23:39:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:58.320 23:39:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:58.320 23:39:32 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:58.320 23:39:32 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:58.320 23:39:32 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:58.320 23:39:32 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.320 23:39:32 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.320 23:39:32 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.320 23:39:32 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:12:58.320 23:39:32 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.320 23:39:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:12:58.320 23:39:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:58.320 23:39:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:58.320 23:39:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:58.320 23:39:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:58.320 23:39:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:58.320 23:39:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:58.320 23:39:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:58.320 23:39:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:58.320 23:39:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:58.320 23:39:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:58.320 23:39:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:58.320 23:39:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:58.320 23:39:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:58.320 23:39:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:58.320 23:39:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:58.320 23:39:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:58.320 23:39:33 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:12:58.320 23:39:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.320 23:39:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:58.320 23:39:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.320 23:39:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:58.320 23:39:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:58.320 23:39:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:12:58.320 23:39:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:00.221 
23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:00.221 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:00.221 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:00.221 Found net devices under 0000:09:00.0: cvl_0_0 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:00.221 Found net devices under 0000:09:00.1: cvl_0_1 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:00.221 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:00.221 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:13:00.221 00:13:00.221 --- 10.0.0.2 ping statistics --- 00:13:00.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.221 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:00.221 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:00.221 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:13:00.221 00:13:00.221 --- 10.0.0.1 ping statistics --- 00:13:00.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.221 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3755694 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3755694 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 3755694 ']' 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:00.221 23:39:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:00.221 [2024-07-15 23:39:35.286692] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
00:13:00.221 [2024-07-15 23:39:35.286777] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:00.221 EAL: No free 2048 kB hugepages reported on node 1 00:13:00.480 [2024-07-15 23:39:35.347395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.480 [2024-07-15 23:39:35.446707] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:00.480 [2024-07-15 23:39:35.446779] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:00.480 [2024-07-15 23:39:35.446806] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:00.480 [2024-07-15 23:39:35.446817] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:00.480 [2024-07-15 23:39:35.446826] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:00.480 [2024-07-15 23:39:35.446852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:00.480 23:39:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:00.480 23:39:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:13:00.480 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:00.480 23:39:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:00.480 23:39:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:00.480 23:39:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:00.480 23:39:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:00.480 23:39:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.480 23:39:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:00.480 [2024-07-15 23:39:35.587825] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:00.480 23:39:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.480 23:39:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:00.480 23:39:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.480 23:39:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:00.738 Malloc0 00:13:00.738 23:39:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.738 23:39:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:00.738 23:39:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.739 23:39:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:00.739 23:39:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.739 23:39:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:00.739 23:39:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.739 
23:39:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:00.739 23:39:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.739 23:39:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:00.739 23:39:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.739 23:39:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:00.739 [2024-07-15 23:39:35.646435] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.739 23:39:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.739 23:39:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3755740 00:13:00.739 23:39:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:13:00.739 23:39:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:00.739 23:39:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3755740 /var/tmp/bdevperf.sock 00:13:00.739 23:39:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 3755740 ']' 00:13:00.739 23:39:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:00.739 23:39:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:00.739 23:39:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:00.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:00.739 23:39:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:00.739 23:39:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:00.739 [2024-07-15 23:39:35.689419] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
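Condensed from the queue_depth trace above: the target-side RPC sequence, then bdevperf attaching as an NVMe/TCP initiator at queue depth 1024. rpc_cmd in the suite is a retry wrapper around rpc.py, and SPDK_DIR is shorthand introduced only for this sketch; the target answers on the default RPC socket /var/tmp/spdk.sock, which stays reachable even though nvmf_tgt runs inside the network namespace, since Unix sockets live on the shared filesystem.

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK_DIR/scripts/rpc.py"

# Target side: TCP transport, a 64 MiB malloc bdev with 512-byte blocks, and a
# subsystem exposing it on 10.0.0.2:4420 (queue_depth.sh lines 23-27 above).
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: -z makes bdevperf wait for perform_tests over its own RPC
# socket (the suite waits for that socket before attaching); -q 1024 is the
# queue depth under test, -o 4096 the I/O size.
"$SPDK_DIR/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock \
  -q 1024 -o 4096 -w verify -t 10 &
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
  -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests

As a sanity check on the results table that follows, Little's law gives 1024 / 8959.50 IOPS ≈ 114 ms of average latency per I/O, consistent with the 113799.50 average reported in the Latency(us) column.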
00:13:00.739 [2024-07-15 23:39:35.689496] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3755740 ] 00:13:00.739 EAL: No free 2048 kB hugepages reported on node 1 00:13:00.739 [2024-07-15 23:39:35.746530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.739 [2024-07-15 23:39:35.851192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.997 23:39:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:00.997 23:39:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:13:00.997 23:39:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:13:00.997 23:39:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.997 23:39:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:00.997 NVMe0n1 00:13:00.997 23:39:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.997 23:39:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:01.255 Running I/O for 10 seconds... 00:13:11.229 00:13:11.229 Latency(us) 00:13:11.229 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:11.229 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:13:11.229 Verification LBA range: start 0x0 length 0x4000 00:13:11.229 NVMe0n1 : 10.07 8959.50 35.00 0.00 0.00 113799.50 9951.76 70293.43 00:13:11.229 =================================================================================================================== 00:13:11.229 Total : 8959.50 35.00 0.00 0.00 113799.50 9951.76 70293.43 00:13:11.229 0 00:13:11.229 23:39:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3755740 00:13:11.229 23:39:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 3755740 ']' 00:13:11.229 23:39:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 3755740 00:13:11.229 23:39:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:13:11.229 23:39:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:11.229 23:39:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3755740 00:13:11.229 23:39:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:11.229 23:39:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:11.229 23:39:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3755740' 00:13:11.229 killing process with pid 3755740 00:13:11.229 23:39:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 3755740 00:13:11.229 Received shutdown signal, test time was about 10.000000 seconds 00:13:11.229 00:13:11.229 Latency(us) 00:13:11.229 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:11.229 
=================================================================================================================== 00:13:11.229 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:11.229 23:39:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 3755740 00:13:11.487 23:39:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:11.487 23:39:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:13:11.487 23:39:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:11.487 23:39:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:13:11.487 23:39:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:11.487 23:39:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:13:11.487 23:39:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:11.487 23:39:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:11.487 rmmod nvme_tcp 00:13:11.487 rmmod nvme_fabrics 00:13:11.487 rmmod nvme_keyring 00:13:11.487 23:39:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:11.487 23:39:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:13:11.487 23:39:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:13:11.487 23:39:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3755694 ']' 00:13:11.487 23:39:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3755694 00:13:11.487 23:39:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 3755694 ']' 00:13:11.487 23:39:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 3755694 00:13:11.487 23:39:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:13:11.487 23:39:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:11.487 23:39:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3755694 00:13:11.745 23:39:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:11.745 23:39:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:11.745 23:39:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3755694' 00:13:11.745 killing process with pid 3755694 00:13:11.745 23:39:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 3755694 00:13:11.745 23:39:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 3755694 00:13:12.006 23:39:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:12.006 23:39:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:12.006 23:39:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:12.006 23:39:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:12.006 23:39:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:12.006 23:39:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:12.006 23:39:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:12.006 23:39:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.911 23:39:48 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:13.911 00:13:13.911 real 0m16.039s 00:13:13.911 user 0m20.437s 00:13:13.911 sys 0m3.996s 00:13:13.911 23:39:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:13.911 23:39:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:13.911 ************************************ 00:13:13.911 END TEST nvmf_queue_depth 00:13:13.911 ************************************ 00:13:13.911 23:39:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:13.911 23:39:48 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:13.911 23:39:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:13.911 23:39:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:13.911 23:39:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:13.911 ************************************ 00:13:13.911 START TEST nvmf_target_multipath 00:13:13.911 ************************************ 00:13:13.911 23:39:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:14.168 * Looking for test storage... 00:13:14.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:14.168 23:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:14.168 23:39:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:13:14.168 23:39:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:14.168 23:39:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:14.168 23:39:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:14.168 23:39:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:14.168 23:39:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:14.168 23:39:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:14.168 23:39:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:14.168 23:39:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:14.168 23:39:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:14.168 23:39:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:14.168 23:39:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:14.168 23:39:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:14.168 23:39:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:14.168 23:39:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:14.168 23:39:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:14.168 23:39:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:14.168 23:39:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:14.168 23:39:49 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:14.168 23:39:49 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:14.168 23:39:49 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:14.168 23:39:49 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.168 23:39:49 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.168 23:39:49 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.168 23:39:49 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:13:14.169 23:39:49 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.169 23:39:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:13:14.169 23:39:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:14.169 23:39:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:14.169 23:39:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:14.169 23:39:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:14.169 23:39:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:13:14.169 23:39:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:14.169 23:39:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:14.169 23:39:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:14.169 23:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:14.169 23:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:14.169 23:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:13:14.169 23:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:14.169 23:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:13:14.169 23:39:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:14.169 23:39:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:14.169 23:39:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:14.169 23:39:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:14.169 23:39:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:14.169 23:39:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.169 23:39:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:14.169 23:39:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.169 23:39:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:14.169 23:39:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:14.169 23:39:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:13:14.169 23:39:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:16.700 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:16.700 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:16.700 Found net devices under 0000:09:00.0: cvl_0_0 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:16.700 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:16.701 Found net devices under 0000:09:00.1: cvl_0_1 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:16.701 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:16.701 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:13:16.701 00:13:16.701 --- 10.0.0.2 ping statistics --- 00:13:16.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.701 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:16.701 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:16.701 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:13:16.701 00:13:16.701 --- 10.0.0.1 ping statistics --- 00:13:16.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.701 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:13:16.701 only one NIC for nvmf test 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:16.701 rmmod nvme_tcp 00:13:16.701 rmmod nvme_fabrics 00:13:16.701 rmmod nvme_keyring 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:16.701 23:39:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.607 23:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:13:18.607 23:39:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:13:18.607 23:39:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:13:18.607 23:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:18.607 23:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:13:18.607 23:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:18.607 23:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:13:18.607 23:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:18.607 23:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:18.607 23:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:18.607 23:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:13:18.607 23:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:13:18.607 23:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:13:18.607 23:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:18.607 23:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:18.607 23:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:18.607 23:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:18.607 23:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:18.607 23:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.607 23:39:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:18.607 23:39:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.607 23:39:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:18.607 00:13:18.607 real 0m4.501s 00:13:18.607 user 0m0.874s 00:13:18.607 sys 0m1.633s 00:13:18.607 23:39:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:18.607 23:39:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:18.607 ************************************ 00:13:18.607 END TEST nvmf_target_multipath 00:13:18.607 ************************************ 00:13:18.607 23:39:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:18.607 23:39:53 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:18.607 23:39:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:18.607 23:39:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:18.607 23:39:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:18.607 ************************************ 00:13:18.607 START TEST nvmf_zcopy 00:13:18.607 ************************************ 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:18.607 * Looking for test storage... 
00:13:18.607 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:13:18.607 23:39:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:21.136 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:21.136 
23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:21.136 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:21.136 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:21.137 Found net devices under 0000:09:00.0: cvl_0_0 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:21.137 Found net devices under 0000:09:00.1: cvl_0_1 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:21.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:21.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:13:21.137 00:13:21.137 --- 10.0.0.2 ping statistics --- 00:13:21.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.137 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:21.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:21.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:13:21.137 00:13:21.137 --- 10.0.0.1 ping statistics --- 00:13:21.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.137 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3760930 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3760930 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 3760930 ']' 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:21.137 23:39:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:21.137 [2024-07-15 23:39:55.948287] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:13:21.137 [2024-07-15 23:39:55.948383] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:21.137 EAL: No free 2048 kB hugepages reported on node 1 00:13:21.137 [2024-07-15 23:39:56.016499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.137 [2024-07-15 23:39:56.126888] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:21.137 [2024-07-15 23:39:56.126948] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:21.137 [2024-07-15 23:39:56.126985] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:13:21.137 [2024-07-15 23:39:56.126997] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:13:21.137 [2024-07-15 23:39:56.127007] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:21.137 [2024-07-15 23:39:56.127040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:13:21.137 23:39:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:13:21.137 23:39:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0
00:13:21.137 23:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:13:21.137 23:39:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable
00:13:21.137 23:39:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:21.396 23:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:21.396 23:39:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:13:21.396 23:39:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:13:21.396 23:39:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:21.396 23:39:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:21.396 [2024-07-15 23:39:56.272998] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:13:21.396 23:39:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:21.396 23:39:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:13:21.396 23:39:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:21.396 23:39:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:21.396 23:39:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:21.396 23:39:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:21.396 23:39:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:21.396 23:39:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:21.396 [2024-07-15 23:39:56.289210] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:21.396 23:39:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:21.396 23:39:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:13:21.396 23:39:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:21.396 23:39:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:21.396 23:39:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:21.396 23:39:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:13:21.396 23:39:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:21.396 23:39:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:21.396 malloc0
00:13:21.396 23:39:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:21.396 23:39:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:13:21.396 23:39:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:21.396 23:39:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:21.396 23:39:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:21.396 23:39:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:13:21.396 23:39:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:13:21.396 23:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:13:21.396 23:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:13:21.396 23:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:13:21.396 23:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:13:21.396 {
00:13:21.396 "params": {
00:13:21.396 "name": "Nvme$subsystem",
00:13:21.396 "trtype": "$TEST_TRANSPORT",
00:13:21.396 "traddr": "$NVMF_FIRST_TARGET_IP",
00:13:21.396 "adrfam": "ipv4",
00:13:21.396 "trsvcid": "$NVMF_PORT",
00:13:21.396 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:13:21.396 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:13:21.396 "hdgst": ${hdgst:-false},
00:13:21.396 "ddgst": ${ddgst:-false}
00:13:21.396 },
00:13:21.396 "method": "bdev_nvme_attach_controller"
00:13:21.396 }
00:13:21.396 EOF
00:13:21.396 )")
00:13:21.396 23:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:13:21.396 23:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:13:21.396 23:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:13:21.396 23:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:13:21.396 "params": {
00:13:21.396 "name": "Nvme1",
00:13:21.396 "trtype": "tcp",
00:13:21.396 "traddr": "10.0.0.2",
00:13:21.396 "adrfam": "ipv4",
00:13:21.396 "trsvcid": "4420",
00:13:21.396 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:13:21.396 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:13:21.396 "hdgst": false,
00:13:21.396 "ddgst": false
00:13:21.396 },
00:13:21.397 "method": "bdev_nvme_attach_controller"
00:13:21.397 }'
00:13:21.397 [2024-07-15 23:39:56.373846] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization...
00:13:21.397 [2024-07-15 23:39:56.373930] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3760958 ]
00:13:21.397 EAL: No free 2048 kB hugepages reported on node 1
00:13:21.397 [2024-07-15 23:39:56.438161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:21.656 [2024-07-15 23:39:56.549144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:13:21.914 Running I/O for 10 seconds...
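[Editor's note: the rpc_cmd calls traced above map one-to-one onto scripts/rpc.py; below is a consolidated sketch of the target bring-up, assuming the default /var/tmp/spdk.sock RPC socket shown in the trace (paths shortened, flags copied verbatim). The --json /dev/fd/62 argument in the trace is bash process substitution, i.e. --json <(gen_nvmf_target_json):]

    rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy       # -c 0: in-capsule data size 0; --zcopy: zero-copy receives; -o as traced
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # -a: allow any host; -m 10: max namespaces
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_malloc_create 32 4096 -b malloc0              # 32 MB malloc bdev, 4096-byte blocks
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # expose malloc0 as NSID 1
    bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192 # 10 s verify run, QD 128, 8 KiB I/O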
00:13:31.904
00:13:31.904 Latency(us)
00:13:31.904 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:31.904 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:13:31.904 Verification LBA range: start 0x0 length 0x1000
00:13:31.904 Nvme1n1 : 10.02 5887.02 45.99 0.00 0.00 21683.84 3349.62 30098.01
00:13:31.904 ===================================================================================================================
00:13:31.904 Total : 5887.02 45.99 0.00 0.00 21683.84 3349.62 30098.01
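[Editor's note: two quick consistency checks on the table above. Throughput: 5887.02 IOPS x 8192 B per I/O = 48.23 MB/s = 45.99 MiB/s, matching the MiB/s column. Outstanding I/O, by Little's law: 5887.02 IOPS x 21683.84 us average latency = ~127.7 I/Os in flight, i.e. the configured queue depth of 128. The Average/min/max columns are latency in microseconds, per the Latency(us) header.]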
00:13:32.161 23:40:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3762152
00:13:32.162 23:40:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:13:32.162 23:40:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:32.162 23:40:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:13:32.162 23:40:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:13:32.162 23:40:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:13:32.162 23:40:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:13:32.162 23:40:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:13:32.162 23:40:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:13:32.162 {
00:13:32.162 "params": {
00:13:32.162 "name": "Nvme$subsystem",
00:13:32.162 "trtype": "$TEST_TRANSPORT",
00:13:32.162 "traddr": "$NVMF_FIRST_TARGET_IP",
00:13:32.162 "adrfam": "ipv4",
00:13:32.162 "trsvcid": "$NVMF_PORT",
00:13:32.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:13:32.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:13:32.162 "hdgst": ${hdgst:-false},
00:13:32.162 "ddgst": ${ddgst:-false}
00:13:32.162 },
00:13:32.162 "method": "bdev_nvme_attach_controller"
00:13:32.162 }
00:13:32.162 EOF
00:13:32.162 )")
[2024-07-15 23:40:07.199704] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-07-15 23:40:07.199742] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.162 23:40:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:13:32.162 23:40:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:13:32.162 23:40:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:13:32.162 23:40:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:13:32.162 "params": {
00:13:32.162 "name": "Nvme1",
00:13:32.162 "trtype": "tcp",
00:13:32.162 "traddr": "10.0.0.2",
00:13:32.162 "adrfam": "ipv4",
00:13:32.162 "trsvcid": "4420",
00:13:32.162 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:13:32.162 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:13:32.162 "hdgst": false,
00:13:32.162 "ddgst": false
00:13:32.162 },
00:13:32.162 "method": "bdev_nvme_attach_controller"
00:13:32.162 }'
[Editor's note: five repetitions of the same two-line error pair (subsystem.c:2058 "Requested NSID 1 already in use" / nvmf_rpc.c:1553 "Unable to add namespace"), timestamps 23:40:07.207675 through 23:40:07.239770, elided here.]
00:13:32.162 [2024-07-15 23:40:07.242398] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization...
00:13:32.162 [2024-07-15 23:40:07.242463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3762152 ]
[Editor's note: eight further repetitions of the error pair (23:40:07.247782 through 23:40:07.303969), interleaved with the second bdevperf instance's startup messages, elided; the surviving notices:]
00:13:32.419 EAL: No free 2048 kB hugepages reported on node 1
00:13:32.419 [2024-07-15 23:40:07.305475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
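[Editor's note: each error pair above is one rejected RPC. NSID 1 on cnode1 is already occupied by malloc0 (added at target/zcopy.sh@30 earlier), so every further nvmf_subsystem_add_ns for NSID 1 fails fast in nvmf_rpc_ns_paused, apparently exercising the subsystem pause/resume path while I/O is in flight rather than expecting the add to succeed. The failure is reproducible in isolation against any target set up as above (sketch, not from this log):]

    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # first add: succeeds
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # second add: "Requested NSID 1 already in use"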
[Editor's note: thirteen repetitions of the error pair (23:40:07.312030 through 23:40:07.408253) elided.]
00:13:32.419 [2024-07-15 23:40:07.415107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
[Editor's note: about twenty further repetitions (23:40:07.416271 through 23:40:07.584793), arriving roughly every 8 ms while the second bdevperf instance finishes starting up, elided.]
00:13:32.678 Running I/O for 5 seconds...
[Editor's note: once I/O is running the pair keeps repeating at roughly 12 ms intervals, 23:40:07.592771 through 23:40:09.800045, where this excerpt breaks off with the 5-second randrw run still in progress; all further repetitions elided.]
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.740 [2024-07-15 23:40:09.811598] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.740 [2024-07-15 23:40:09.811624] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.740 [2024-07-15 23:40:09.823376] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.740 [2024-07-15 23:40:09.823403] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.740 [2024-07-15 23:40:09.835073] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.740 [2024-07-15 23:40:09.835114] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.740 [2024-07-15 23:40:09.847123] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.740 [2024-07-15 23:40:09.847151] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.740 [2024-07-15 23:40:09.858886] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.740 [2024-07-15 23:40:09.858912] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.998 [2024-07-15 23:40:09.871108] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.998 [2024-07-15 23:40:09.871135] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.998 [2024-07-15 23:40:09.883044] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.998 [2024-07-15 23:40:09.883072] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.998 [2024-07-15 23:40:09.894725] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.998 [2024-07-15 23:40:09.894752] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.998 [2024-07-15 23:40:09.906519] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.998 [2024-07-15 23:40:09.906545] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.998 [2024-07-15 23:40:09.917994] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.998 [2024-07-15 23:40:09.918021] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.998 [2024-07-15 23:40:09.929799] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.998 [2024-07-15 23:40:09.929825] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.998 [2024-07-15 23:40:09.941411] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.998 [2024-07-15 23:40:09.941438] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.998 [2024-07-15 23:40:09.953046] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.998 [2024-07-15 23:40:09.953074] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.998 [2024-07-15 23:40:09.964942] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.998 [2024-07-15 23:40:09.964978] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.998 [2024-07-15 23:40:09.976433] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.998 [2024-07-15 23:40:09.976459] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.998 [2024-07-15 23:40:09.988119] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.998 [2024-07-15 23:40:09.988146] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.998 [2024-07-15 23:40:09.999828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.998 [2024-07-15 23:40:09.999854] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.998 [2024-07-15 23:40:10.011652] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.999 [2024-07-15 23:40:10.011681] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.999 [2024-07-15 23:40:10.023279] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.999 [2024-07-15 23:40:10.023306] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.999 [2024-07-15 23:40:10.035065] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.999 [2024-07-15 23:40:10.035094] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.999 [2024-07-15 23:40:10.047296] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.999 [2024-07-15 23:40:10.047324] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.999 [2024-07-15 23:40:10.059140] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.999 [2024-07-15 23:40:10.059168] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.999 [2024-07-15 23:40:10.072816] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.999 [2024-07-15 23:40:10.072843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.999 [2024-07-15 23:40:10.084060] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.999 [2024-07-15 23:40:10.084087] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.999 [2024-07-15 23:40:10.096633] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.999 [2024-07-15 23:40:10.096661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.999 [2024-07-15 23:40:10.108521] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.999 [2024-07-15 23:40:10.108549] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.999 [2024-07-15 23:40:10.120195] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.999 [2024-07-15 23:40:10.120223] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.256 [2024-07-15 23:40:10.132231] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.256 [2024-07-15 23:40:10.132275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.256 [2024-07-15 23:40:10.144562] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.256 [2024-07-15 23:40:10.144588] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.256 [2024-07-15 23:40:10.156340] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.256 [2024-07-15 23:40:10.156367] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.256 [2024-07-15 23:40:10.167987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.256 [2024-07-15 23:40:10.168014] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.256 [2024-07-15 23:40:10.180013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.256 [2024-07-15 23:40:10.180041] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.256 [2024-07-15 23:40:10.191785] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.256 [2024-07-15 23:40:10.191812] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.256 [2024-07-15 23:40:10.203276] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.256 [2024-07-15 23:40:10.203303] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.256 [2024-07-15 23:40:10.214698] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.256 [2024-07-15 23:40:10.214733] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.256 [2024-07-15 23:40:10.225654] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.256 [2024-07-15 23:40:10.225683] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.256 [2024-07-15 23:40:10.237237] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.256 [2024-07-15 23:40:10.237265] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.256 [2024-07-15 23:40:10.248946] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.256 [2024-07-15 23:40:10.248990] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.256 [2024-07-15 23:40:10.262606] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.256 [2024-07-15 23:40:10.262633] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.256 [2024-07-15 23:40:10.274018] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.256 [2024-07-15 23:40:10.274045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.256 [2024-07-15 23:40:10.285451] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.256 [2024-07-15 23:40:10.285478] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.256 [2024-07-15 23:40:10.297036] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.256 [2024-07-15 23:40:10.297063] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.256 [2024-07-15 23:40:10.308446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.256 [2024-07-15 23:40:10.308474] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.256 [2024-07-15 23:40:10.320700] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.256 [2024-07-15 23:40:10.320728] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.256 [2024-07-15 23:40:10.334109] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.256 [2024-07-15 23:40:10.334136] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.256 [2024-07-15 23:40:10.345373] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.256 [2024-07-15 23:40:10.345402] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.256 [2024-07-15 23:40:10.357314] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.256 [2024-07-15 23:40:10.357342] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.256 [2024-07-15 23:40:10.368994] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.256 [2024-07-15 23:40:10.369021] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.256 [2024-07-15 23:40:10.380505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.256 [2024-07-15 23:40:10.380533] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.514 [2024-07-15 23:40:10.392435] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.514 [2024-07-15 23:40:10.392463] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.514 [2024-07-15 23:40:10.403969] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.514 [2024-07-15 23:40:10.403996] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.514 [2024-07-15 23:40:10.415696] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.514 [2024-07-15 23:40:10.415723] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.514 [2024-07-15 23:40:10.429185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.514 [2024-07-15 23:40:10.429219] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.514 [2024-07-15 23:40:10.440125] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.514 [2024-07-15 23:40:10.440152] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.514 [2024-07-15 23:40:10.452277] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.514 [2024-07-15 23:40:10.452305] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.514 [2024-07-15 23:40:10.463945] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.514 [2024-07-15 23:40:10.463996] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.514 [2024-07-15 23:40:10.475788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.514 [2024-07-15 23:40:10.475822] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.514 [2024-07-15 23:40:10.489293] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.514 [2024-07-15 23:40:10.489321] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.514 [2024-07-15 23:40:10.500200] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.514 [2024-07-15 23:40:10.500228] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.514 [2024-07-15 23:40:10.511345] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.514 [2024-07-15 23:40:10.511372] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.514 [2024-07-15 23:40:10.523243] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.514 [2024-07-15 23:40:10.523270] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.514 [2024-07-15 23:40:10.535206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.514 [2024-07-15 23:40:10.535233] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.514 [2024-07-15 23:40:10.548891] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.514 [2024-07-15 23:40:10.548918] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.514 [2024-07-15 23:40:10.559854] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.514 [2024-07-15 23:40:10.559881] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.514 [2024-07-15 23:40:10.572420] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.514 [2024-07-15 23:40:10.572447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.514 [2024-07-15 23:40:10.584155] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.514 [2024-07-15 23:40:10.584182] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.514 [2024-07-15 23:40:10.596118] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.514 [2024-07-15 23:40:10.596145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.514 [2024-07-15 23:40:10.609987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.514 [2024-07-15 23:40:10.610014] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.514 [2024-07-15 23:40:10.621122] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.514 [2024-07-15 23:40:10.621150] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.514 [2024-07-15 23:40:10.632472] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.514 [2024-07-15 23:40:10.632500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.772 [2024-07-15 23:40:10.643913] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.772 [2024-07-15 23:40:10.643941] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.772 [2024-07-15 23:40:10.655362] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.772 [2024-07-15 23:40:10.655391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.772 [2024-07-15 23:40:10.666793] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.773 [2024-07-15 23:40:10.666821] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.773 [2024-07-15 23:40:10.680299] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.773 [2024-07-15 23:40:10.680327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.773 [2024-07-15 23:40:10.691330] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.773 [2024-07-15 23:40:10.691358] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.773 [2024-07-15 23:40:10.702732] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.773 [2024-07-15 23:40:10.702771] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.773 [2024-07-15 23:40:10.714332] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.773 [2024-07-15 23:40:10.714361] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.773 [2024-07-15 23:40:10.726378] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.773 [2024-07-15 23:40:10.726405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.773 [2024-07-15 23:40:10.738966] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.773 [2024-07-15 23:40:10.738994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.773 [2024-07-15 23:40:10.750650] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.773 [2024-07-15 23:40:10.750677] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.773 [2024-07-15 23:40:10.762165] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.773 [2024-07-15 23:40:10.762193] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.773 [2024-07-15 23:40:10.774160] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.773 [2024-07-15 23:40:10.774188] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.773 [2024-07-15 23:40:10.785768] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.773 [2024-07-15 23:40:10.785795] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.773 [2024-07-15 23:40:10.797859] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.773 [2024-07-15 23:40:10.797886] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.773 [2024-07-15 23:40:10.810101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.773 [2024-07-15 23:40:10.810128] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.773 [2024-07-15 23:40:10.821787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.773 [2024-07-15 23:40:10.821813] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.773 [2024-07-15 23:40:10.834106] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.773 [2024-07-15 23:40:10.834134] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.773 [2024-07-15 23:40:10.845545] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.773 [2024-07-15 23:40:10.845572] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.773 [2024-07-15 23:40:10.856783] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.773 [2024-07-15 23:40:10.856809] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.773 [2024-07-15 23:40:10.868440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.773 [2024-07-15 23:40:10.868466] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.773 [2024-07-15 23:40:10.879773] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.773 [2024-07-15 23:40:10.879800] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.773 [2024-07-15 23:40:10.891887] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.773 [2024-07-15 23:40:10.891913] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.031 [2024-07-15 23:40:10.903435] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.031 [2024-07-15 23:40:10.903461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.031 [2024-07-15 23:40:10.916899] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.031 [2024-07-15 23:40:10.916926] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.031 [2024-07-15 23:40:10.928036] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.031 [2024-07-15 23:40:10.928088] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.031 [2024-07-15 23:40:10.940050] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.031 [2024-07-15 23:40:10.940077] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.031 [2024-07-15 23:40:10.952054] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.031 [2024-07-15 23:40:10.952081] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.031 [2024-07-15 23:40:10.963964] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.031 [2024-07-15 23:40:10.963992] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.031 [2024-07-15 23:40:10.975427] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.031 [2024-07-15 23:40:10.975454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.031 [2024-07-15 23:40:10.987063] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.031 [2024-07-15 23:40:10.987090] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.031 [2024-07-15 23:40:10.998807] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.031 [2024-07-15 23:40:10.998833] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.031 [2024-07-15 23:40:11.010352] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.031 [2024-07-15 23:40:11.010393] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.031 [2024-07-15 23:40:11.022097] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.032 [2024-07-15 23:40:11.022124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.032 [2024-07-15 23:40:11.033794] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.032 [2024-07-15 23:40:11.033820] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.032 [2024-07-15 23:40:11.045760] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.032 [2024-07-15 23:40:11.045787] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.032 [2024-07-15 23:40:11.057826] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.032 [2024-07-15 23:40:11.057853] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.032 [2024-07-15 23:40:11.069187] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.032 [2024-07-15 23:40:11.069214] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.032 [2024-07-15 23:40:11.080985] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.032 [2024-07-15 23:40:11.081012] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.032 [2024-07-15 23:40:11.092316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.032 [2024-07-15 23:40:11.092343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.032 [2024-07-15 23:40:11.104129] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.032 [2024-07-15 23:40:11.104156] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.032 [2024-07-15 23:40:11.115784] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.032 [2024-07-15 23:40:11.115810] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.032 [2024-07-15 23:40:11.127285] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.032 [2024-07-15 23:40:11.127326] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.032 [2024-07-15 23:40:11.138690] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.032 [2024-07-15 23:40:11.138716] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.032 [2024-07-15 23:40:11.151012] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.032 [2024-07-15 23:40:11.151040] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.313 [2024-07-15 23:40:11.162834] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.313 [2024-07-15 23:40:11.162860] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.313 [2024-07-15 23:40:11.174660] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.313 [2024-07-15 23:40:11.174686] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.313 [2024-07-15 23:40:11.186413] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.313 [2024-07-15 23:40:11.186440] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.313 [2024-07-15 23:40:11.198104] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.313 [2024-07-15 23:40:11.198131] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.313 [2024-07-15 23:40:11.209358] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.313 [2024-07-15 23:40:11.209385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.313 [2024-07-15 23:40:11.221074] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.313 [2024-07-15 23:40:11.221101] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.313 [2024-07-15 23:40:11.233065] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.313 [2024-07-15 23:40:11.233092] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.313 [2024-07-15 23:40:11.246227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.313 [2024-07-15 23:40:11.246268] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.313 [2024-07-15 23:40:11.257702] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.313 [2024-07-15 23:40:11.257728] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.313 [2024-07-15 23:40:11.269530] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.313 [2024-07-15 23:40:11.269556] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.313 [2024-07-15 23:40:11.281396] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.313 [2024-07-15 23:40:11.281422] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.313 [2024-07-15 23:40:11.292818] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.313 [2024-07-15 23:40:11.292845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.313 [2024-07-15 23:40:11.304505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.313 [2024-07-15 23:40:11.304532] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.313 [2024-07-15 23:40:11.316243] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.313 [2024-07-15 23:40:11.316284] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.313 [2024-07-15 23:40:11.327791] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.313 [2024-07-15 23:40:11.327829] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.313 [2024-07-15 23:40:11.339699] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.313 [2024-07-15 23:40:11.339726] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.313 [2024-07-15 23:40:11.351668] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.313 [2024-07-15 23:40:11.351695] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.313 [2024-07-15 23:40:11.363564] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.313 [2024-07-15 23:40:11.363590] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.313 [2024-07-15 23:40:11.375114] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.313 [2024-07-15 23:40:11.375142] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.313 [2024-07-15 23:40:11.386808] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.313 [2024-07-15 23:40:11.386835] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.313 [2024-07-15 23:40:11.398340] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.313 [2024-07-15 23:40:11.398366] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.313 [2024-07-15 23:40:11.410180] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.313 [2024-07-15 23:40:11.410222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.313 [2024-07-15 23:40:11.421873] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.313 [2024-07-15 23:40:11.421899] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.313 [2024-07-15 23:40:11.433807] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.313 [2024-07-15 23:40:11.433834] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.571 [2024-07-15 23:40:11.447496] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.571 [2024-07-15 23:40:11.447522] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.571 [2024-07-15 23:40:11.458546] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.571 [2024-07-15 23:40:11.458573] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.571 [2024-07-15 23:40:11.469891] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.571 [2024-07-15 23:40:11.469919] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.571 [2024-07-15 23:40:11.481484] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.571 [2024-07-15 23:40:11.481510] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.571 [2024-07-15 23:40:11.493668] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.571 [2024-07-15 23:40:11.493709] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.571 [2024-07-15 23:40:11.505137] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.571 [2024-07-15 23:40:11.505165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.571 [2024-07-15 23:40:11.516636] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.571 [2024-07-15 23:40:11.516662] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.571 [2024-07-15 23:40:11.528026] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.571 [2024-07-15 23:40:11.528053] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.571 [2024-07-15 23:40:11.539696] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.572 [2024-07-15 23:40:11.539722] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.572 [2024-07-15 23:40:11.551200] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.572 [2024-07-15 23:40:11.551227] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.572 [2024-07-15 23:40:11.562932] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.572 [2024-07-15 23:40:11.562967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.572 [2024-07-15 23:40:11.576777] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.572 [2024-07-15 23:40:11.576815] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.572 [2024-07-15 23:40:11.588184] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.572 [2024-07-15 23:40:11.588226] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.572 [2024-07-15 23:40:11.599441] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.572 [2024-07-15 23:40:11.599467] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.572 [2024-07-15 23:40:11.611287] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.572 [2024-07-15 23:40:11.611314] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.572 [2024-07-15 23:40:11.623130] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.572 [2024-07-15 23:40:11.623157] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.572 [2024-07-15 23:40:11.635324] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.572 [2024-07-15 23:40:11.635351] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.572 [2024-07-15 23:40:11.647425] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.572 [2024-07-15 23:40:11.647453] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.572 [2024-07-15 23:40:11.659198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.572 [2024-07-15 23:40:11.659224] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.572 [2024-07-15 23:40:11.670416] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.572 [2024-07-15 23:40:11.670442] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.572 [2024-07-15 23:40:11.682019] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.572 [2024-07-15 23:40:11.682046] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.572 [2024-07-15 23:40:11.693630] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.572 [2024-07-15 23:40:11.693657] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.830 [2024-07-15 23:40:11.705596] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.830 [2024-07-15 23:40:11.705622] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.830 [2024-07-15 23:40:11.717587] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.830 [2024-07-15 23:40:11.717614] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.830 [2024-07-15 23:40:11.729616] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.830 [2024-07-15 23:40:11.729644] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.830 [2024-07-15 23:40:11.741543] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.830 [2024-07-15 23:40:11.741570] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.830 [2024-07-15 23:40:11.753177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.830 [2024-07-15 23:40:11.753213] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.830 [2024-07-15 23:40:11.764719] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.830 [2024-07-15 23:40:11.764747] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.830 [2024-07-15 23:40:11.776358] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.830 [2024-07-15 23:40:11.776385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.830 [2024-07-15 23:40:11.788163] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.830 [2024-07-15 23:40:11.788191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.830 [2024-07-15 23:40:11.799700] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.830 [2024-07-15 23:40:11.799743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.830 [2024-07-15 23:40:11.811398] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.830 [2024-07-15 23:40:11.811426] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.830 [2024-07-15 23:40:11.822706] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.830 [2024-07-15 23:40:11.822733] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.830 [2024-07-15 23:40:11.834300] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.830 [2024-07-15 23:40:11.834328] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.830 [2024-07-15 23:40:11.845852] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.830 [2024-07-15 23:40:11.845879] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.830 [2024-07-15 23:40:11.857243] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.830 [2024-07-15 23:40:11.857284] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.830 [2024-07-15 23:40:11.869026] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.830 [2024-07-15 23:40:11.869054] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.830 [2024-07-15 23:40:11.882884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.830 [2024-07-15 23:40:11.882911] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.830 [2024-07-15 23:40:11.893770] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.830 [2024-07-15 23:40:11.893798] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.830 [2024-07-15 23:40:11.904853] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.831 [2024-07-15 23:40:11.904880] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.831 [2024-07-15 23:40:11.916339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.831 [2024-07-15 23:40:11.916366] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.831 [2024-07-15 23:40:11.928350] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.831 [2024-07-15 23:40:11.928376] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.831 [2024-07-15 23:40:11.940377] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.831 [2024-07-15 23:40:11.940404] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.831 [2024-07-15 23:40:11.952380] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:36.831 [2024-07-15 23:40:11.952406] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.089 [2024-07-15 23:40:11.964222] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.089 [2024-07-15 23:40:11.964263] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.089 [2024-07-15 23:40:11.976803] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.089 [2024-07-15 23:40:11.976830] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.089 [2024-07-15 23:40:11.988583] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.089 [2024-07-15 23:40:11.988610] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.089 [2024-07-15 23:40:11.999849] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.089 [2024-07-15 23:40:11.999876] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.089 [2024-07-15 23:40:12.011358] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.089 [2024-07-15 23:40:12.011385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.089 [2024-07-15 23:40:12.023076] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.089 [2024-07-15 23:40:12.023104] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.089 [2024-07-15 23:40:12.034500] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:37.089 [2024-07-15 23:40:12.034535] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the subsystem.c:2058 "Requested NSID 1 already in use" / nvmf_rpc.c:1553 "Unable to add namespace" pair above repeats about 50 more times at roughly 11 ms intervals, from 23:40:12.046312 through 23:40:12.606030, as the test keeps retrying nvmf_subsystem_add_ns against the paused subsystem; identical records elided]
00:13:37.607
00:13:37.607 Latency(us)
00:13:37.607 Device Information                                                          : runtime(s)       IOPS      MiB/s     Fail/s      TO/s     Average        min        max
00:13:37.607 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:13:37.607 Nvme1n1                                                                     :       5.01   10814.51      84.49       0.00      0.00    11819.54    5145.79   25049.32
00:13:37.607 ===================================================================================================================
00:13:37.607 Total                                                                       :              10814.51      84.49       0.00      0.00    11819.54    5145.79   25049.32
00:13:37.608 [the same error pair resumes at 23:40:12.611992 and repeats about 30 more times at roughly 8 ms intervals through 23:40:12.868681; identical records elided]
00:13:37.866 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3762152) - No such process
00:13:37.866 23:40:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3762152
00:13:37.866 23:40:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:37.866 23:40:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:37.866 23:40:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:37.866 23:40:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:37.866 23:40:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:13:37.866 23:40:12
nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.866 23:40:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:37.866 delay0 00:13:37.866 23:40:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.866 23:40:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:13:37.866 23:40:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.866 23:40:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:37.866 23:40:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.866 23:40:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:13:37.866 EAL: No free 2048 kB hugepages reported on node 1 00:13:37.866 [2024-07-15 23:40:12.946047] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:44.419 Initializing NVMe Controllers 00:13:44.419 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:44.419 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:44.419 Initialization complete. Launching workers. 00:13:44.419 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 96 00:13:44.419 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 383, failed to submit 33 00:13:44.419 success 209, unsuccess 174, failed 0 00:13:44.419 23:40:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:13:44.419 23:40:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:13:44.419 23:40:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:44.419 23:40:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:13:44.419 23:40:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:44.419 23:40:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:13:44.419 23:40:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:44.419 23:40:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:44.419 rmmod nvme_tcp 00:13:44.419 rmmod nvme_fabrics 00:13:44.419 rmmod nvme_keyring 00:13:44.419 23:40:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:44.419 23:40:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:13:44.419 23:40:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:13:44.419 23:40:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3760930 ']' 00:13:44.419 23:40:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3760930 00:13:44.419 23:40:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 3760930 ']' 00:13:44.419 23:40:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 3760930 00:13:44.419 23:40:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:13:44.419 23:40:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:44.419 23:40:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3760930 00:13:44.419 23:40:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:44.419 
23:40:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:13:44.419 23:40:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3760930'
00:13:44.419 killing process with pid 3760930
00:13:44.419 23:40:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 3760930
00:13:44.419 23:40:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 3760930
00:13:44.419 23:40:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:13:44.419 23:40:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:13:44.419 23:40:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:13:44.419 23:40:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:13:44.419 23:40:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns
00:13:44.419 23:40:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:44.419 23:40:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:13:44.419 23:40:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:46.954 23:40:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:13:46.954
00:13:46.954 real 0m27.981s
00:13:46.954 user 0m40.543s
00:13:46.954 sys 0m8.636s
00:13:46.954 23:40:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable
00:13:46.954 23:40:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:46.954 ************************************
00:13:46.954 END TEST nvmf_zcopy
00:13:46.954 ************************************
00:13:46.954 23:40:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
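A closing note on the zcopy run above: the delay bdev delay0 wraps malloc0 with -r/-t/-w/-n all set to 1000000, which (per the SPDK bdev_delay documentation - an assumption, since the log itself does not spell out the flag meanings) pins average and p99 read/write latency at one full second, so the abort example always finds queued I/O to cancel. rpc_cmd in these traces is the harness wrapper around scripts/rpc.py; replayed by hand against a target with the default RPC socket, the sequence is roughly:

    # sketch: recreate the delayed namespace, then hammer it with the abort tool
    ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # command line copied verbatim from the run above
    sudo ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The success/unsuccess split the tool reports varies from run to run, since it depends on how many aborts race ahead of the one-second I/O completions.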
00:13:46.954 23:40:21 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:13:46.954 23:40:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:13:46.954 23:40:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:13:46.954 23:40:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:46.954 ************************************
00:13:46.954 START TEST nvmf_nmic
00:13:46.954 ************************************
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:13:46.954 * Looking for test storage...
00:13:46.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same golangci/protoc/go toolchain trio repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[remainder identical to the @2 value above; duplicate elided]
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[remainder identical to the @2 value above; duplicate elided]
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo [final PATH value; identical to @4, elided]
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable
00:13:46.954 23:40:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:13:48.854
23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:48.854 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:48.854 23:40:23 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:48.854 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:48.854 Found net devices under 0000:09:00.0: cvl_0_0 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:48.854 Found net devices under 0000:09:00.1: cvl_0_1 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
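The NIC discovery above comes down to a sysfs glob: for each matching PCI function the script lists /sys/bus/pci/devices/$pci/net/ to learn the kernel netdev name (cvl_0_0 and cvl_0_1 in this run). A standalone equivalent for the two E810 ports found above:

    # map each E810 PCI function to its netdev, as nvmf/common.sh does internally
    for pci in 0000:09:00.0 0000:09:00.1; do
        echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/)"
    done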
00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:48.854 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:48.854 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:13:48.854 00:13:48.854 --- 10.0.0.2 ping statistics --- 00:13:48.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.854 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:48.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:48.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:13:48.854 00:13:48.854 --- 10.0.0.1 ping statistics --- 00:13:48.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.854 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3765534 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3765534 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 3765534 ']' 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:48.854 23:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:48.854 [2024-07-15 23:40:23.933381] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:13:48.854 [2024-07-15 23:40:23.933461] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:48.854 EAL: No free 2048 kB hugepages reported on node 1 00:13:49.111 [2024-07-15 23:40:23.995127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:49.111 [2024-07-15 23:40:24.099080] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:49.111 [2024-07-15 23:40:24.099137] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
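Note that nvmf_tgt is launched inside the new namespace: the NVMF_TARGET_NS_CMD prefix set up earlier expands to 'ip netns exec cvl_0_0_ns_spdk', so the target only sees cvl_0_0 and 10.0.0.2 while the initiator side keeps cvl_0_1 and 10.0.0.1. Started by hand the equivalent is roughly the following (binary path and flags as in the log; framework_wait_init is a standard SPDK RPC used here as a stand-in for the harness's waitforlisten helper):

    sudo ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # the RPC unix socket is reached by filesystem path, so no netns exec is needed here
    sudo ./scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init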
00:13:49.111 [2024-07-15 23:40:24.099164] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:49.111 [2024-07-15 23:40:24.099175] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:49.111 [2024-07-15 23:40:24.099184] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:49.111 [2024-07-15 23:40:24.099265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:49.111 [2024-07-15 23:40:24.099304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:49.111 [2024-07-15 23:40:24.099360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:49.111 [2024-07-15 23:40:24.099363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.111 23:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:49.111 23:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:13:49.111 23:40:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:49.111 23:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:49.111 23:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:49.374 23:40:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:49.374 23:40:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:49.374 23:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.374 23:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:49.374 [2024-07-15 23:40:24.255801] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:49.374 23:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.374 23:40:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:49.375 23:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.375 23:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:49.375 Malloc0 00:13:49.375 23:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.375 23:40:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:49.375 23:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.375 23:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:49.375 23:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.375 23:40:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:49.375 23:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.375 23:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:49.375 23:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.375 23:40:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:49.375 23:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.375 23:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:49.375 [2024-07-15 23:40:24.308433] tcp.c: 981:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:49.375 23:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.375 23:40:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:49.375 test case1: single bdev can't be used in multiple subsystems 00:13:49.375 23:40:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:49.375 23:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.375 23:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:49.375 23:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.375 23:40:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:49.375 23:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.375 23:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:49.375 23:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.375 23:40:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:49.375 23:40:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:49.375 23:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.375 23:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:49.375 [2024-07-15 23:40:24.332288] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:49.375 [2024-07-15 23:40:24.332332] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:49.375 [2024-07-15 23:40:24.332347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.375 request: 00:13:49.375 { 00:13:49.375 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:49.375 "namespace": { 00:13:49.375 "bdev_name": "Malloc0", 00:13:49.375 "no_auto_visible": false 00:13:49.375 }, 00:13:49.375 "method": "nvmf_subsystem_add_ns", 00:13:49.375 "req_id": 1 00:13:49.375 } 00:13:49.375 Got JSON-RPC error response 00:13:49.375 response: 00:13:49.375 { 00:13:49.375 "code": -32602, 00:13:49.375 "message": "Invalid parameters" 00:13:49.375 } 00:13:49.375 23:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:13:49.375 23:40:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:49.375 23:40:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:49.375 23:40:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:49.375 Adding namespace failed - expected result. 
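That failure is the point of test case1: Malloc0 is already claimed (type exclusive_write) by cnode1, so the attempt to add it to cnode2 is rejected with the -32602 JSON-RPC response above. Stripped of harness plumbing, the reproduction is just three RPCs against a live target (subsystem names and bdev taken from this log):

    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # fails: bdev already claimed by cnode1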
00:13:49.375 23:40:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:49.375 test case2: host connect to nvmf target in multiple paths 00:13:49.375 23:40:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:49.375 23:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.375 23:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:49.375 [2024-07-15 23:40:24.340399] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:49.375 23:40:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.375 23:40:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:49.959 23:40:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:13:50.522 23:40:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:50.522 23:40:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:13:50.522 23:40:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:50.522 23:40:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:50.522 23:40:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:13:53.045 23:40:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:53.045 23:40:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:53.045 23:40:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:53.045 23:40:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:53.045 23:40:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:53.045 23:40:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:13:53.045 23:40:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:53.045 [global] 00:13:53.045 thread=1 00:13:53.045 invalidate=1 00:13:53.045 rw=write 00:13:53.045 time_based=1 00:13:53.045 runtime=1 00:13:53.045 ioengine=libaio 00:13:53.045 direct=1 00:13:53.045 bs=4096 00:13:53.045 iodepth=1 00:13:53.045 norandommap=0 00:13:53.045 numjobs=1 00:13:53.045 00:13:53.045 verify_dump=1 00:13:53.045 verify_backlog=512 00:13:53.045 verify_state_save=0 00:13:53.045 do_verify=1 00:13:53.045 verify=crc32c-intel 00:13:53.045 [job0] 00:13:53.045 filename=/dev/nvme0n1 00:13:53.045 Could not set queue depth (nvme0n1) 00:13:53.045 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:53.045 fio-3.35 00:13:53.045 Starting 1 thread 00:13:53.977 00:13:53.977 job0: (groupid=0, jobs=1): err= 0: pid=3766170: Mon Jul 15 23:40:28 2024 00:13:53.977 read: IOPS=22, BW=88.5KiB/s (90.7kB/s)(92.0KiB/1039msec) 00:13:53.977 slat (nsec): min=6470, max=32861, avg=21698.26, stdev=10083.66 
00:13:53.977 clat (usec): min=40346, max=42035, avg=41422.89, stdev=558.62
00:13:53.977 lat (usec): min=40353, max=42068, avg=41444.59, stdev=564.56
00:13:53.977 clat percentiles (usec):
00:13:53.977 | 1.00th=[40109], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157],
00:13:53.977 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681],
00:13:53.977 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:13:53.977 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:13:53.977 | 99.99th=[42206]
00:13:53.977 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets
00:13:53.977 slat (nsec): min=5437, max=31936, avg=11031.64, stdev=6385.00
00:13:53.977 clat (usec): min=130, max=416, avg=152.41, stdev=16.89
00:13:53.977 lat (usec): min=137, max=441, avg=163.44, stdev=19.51
00:13:53.977 clat percentiles (usec):
00:13:53.977 | 1.00th=[ 133], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 143],
00:13:53.977 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 153],
00:13:53.977 | 70.00th=[ 157], 80.00th=[ 161], 90.00th=[ 167], 95.00th=[ 172],
00:13:53.977 | 99.00th=[ 196], 99.50th=[ 204], 99.90th=[ 416], 99.95th=[ 416],
00:13:53.977 | 99.99th=[ 416]
00:13:53.977 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1
00:13:53.977 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:13:53.977 lat (usec) : 250=95.51%, 500=0.19%
00:13:53.977 lat (msec) : 50=4.30%
00:13:53.977 cpu : usr=0.29%, sys=0.48%, ctx=535, majf=0, minf=2
00:13:53.977 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:53.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:53.977 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:53.977 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:53.977 latency : target=0, window=0, percentile=100.00%, depth=1
00:13:53.977
00:13:53.977 Run status group 0 (all jobs):
00:13:53.977 READ: bw=88.5KiB/s (90.7kB/s), 88.5KiB/s-88.5KiB/s (90.7kB/s-90.7kB/s), io=92.0KiB (94.2kB), run=1039-1039msec
00:13:53.977 WRITE: bw=1971KiB/s (2018kB/s), 1971KiB/s-1971KiB/s (2018kB/s-2018kB/s), io=2048KiB (2097kB), run=1039-1039msec
00:13:53.977
00:13:53.977 Disk stats (read/write):
00:13:53.977 nvme0n1: ios=69/512, merge=0/0, ticks=817/73, in_queue=890, util=91.58%
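Reading the summary: job0 wrote 512 x 4 KiB (2048 KiB) at iodepth=1 in about 1039 ms, and the small read side (23 completions) lines up with fio's verify pass reading data back (do_verify=1, verify=crc32c-intel). To push the same path harder, the wrapper can be re-invoked with a deeper queue; the flag meanings (-p profile, -i IO size, -d iodepth, -t fio rw type, -r runtime seconds) are inferred from the invocation earlier in this log rather than from wrapper documentation, so treat them as assumptions:

    # hypothetical heavier pass: 4 KiB writes at queue depth 16 for 5 seconds, with verify
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 16 -t write -r 5 -v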
00:13:53.977 23:40:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:13:54.234 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:13:54.234 23:40:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:13:54.234 23:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0
00:13:54.234 23:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:13:54.234 23:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:13:54.234 23:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:13:54.234 23:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:13:54.234 23:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0
00:13:54.234 23:40:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:13:54.234 23:40:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini
00:13:54.234 23:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup
00:13:54.234 23:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync
00:13:54.235 23:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:13:54.235 23:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e
00:13:54.235 23:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20}
00:13:54.235 23:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
23:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
23:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e
23:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0
23:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3765534 ']'
23:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3765534
23:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 3765534 ']'
23:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 3765534
23:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname
23:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
23:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3765534
23:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0
23:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
23:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3765534'
killing process with pid 3765534
23:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 3765534
23:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 3765534
23:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']'
23:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
23:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini
23:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
23:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns
23:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
23:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
23:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:57.025 23:40:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:13:57.025
00:13:57.025 real 0m9.972s
00:13:57.025 user 0m22.609s
00:13:57.025 sys 0m2.284s
00:13:57.025 23:40:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable
00:13:57.025 23:40:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:13:57.025 ************************************
00:13:57.025 END TEST nvmf_nmic
00:13:57.025 ************************************
00:13:57.025 23:40:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:13:57.025 23:40:31 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp
00:13:57.025 23:40:31 nvmf_tcp --
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:57.025 23:40:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:57.025 23:40:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:57.025 ************************************ 00:13:57.025 START TEST nvmf_fio_target 00:13:57.025 ************************************ 00:13:57.025 23:40:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:57.025 * Looking for test storage... 00:13:57.025 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:57.025 23:40:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:57.025 23:40:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:13:57.025 23:40:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:57.025 23:40:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:57.025 23:40:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:57.025 23:40:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:57.025 23:40:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:57.025 23:40:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:57.025 23:40:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:57.025 23:40:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:57.025 23:40:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:57.025 23:40:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:57.025 23:40:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:57.025 23:40:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:57.025 23:40:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:57.025 23:40:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:57.025 23:40:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:57.025 23:40:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:57.025 23:40:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:57.025 23:40:31 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:57.025 23:40:31 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:57.025 23:40:31 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:57.025 23:40:31 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.025 23:40:31 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.025 23:40:31 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.025 23:40:31 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:57.025 23:40:31 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.025 23:40:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:13:57.025 23:40:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:57.025 23:40:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:57.025 23:40:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:57.025 23:40:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:57.025 23:40:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:57.025 23:40:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:57.025 23:40:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:57.025 23:40:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:57.025 23:40:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:57.025 23:40:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:57.025 23:40:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:57.025 23:40:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:57.025 23:40:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:57.025 23:40:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:57.026 23:40:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:57.026 23:40:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:57.026 23:40:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:57.026 23:40:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:57.026 23:40:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:57.026 23:40:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.026 23:40:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:57.026 23:40:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:57.026 23:40:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:13:57.026 23:40:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:58.924 23:40:33 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:58.924 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:58.924 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:58.924 23:40:33 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:58.924 Found net devices under 0000:09:00.0: cvl_0_0 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:58.924 Found net devices under 0000:09:00.1: cvl_0_1 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:58.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:58.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:13:58.924 00:13:58.924 --- 10.0.0.2 ping statistics --- 00:13:58.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.924 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:58.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:58.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:13:58.924 00:13:58.924 --- 10.0.0.1 ping statistics --- 00:13:58.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.924 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3768250 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3768250 00:13:58.924 23:40:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 3768250 ']' 00:13:58.925 23:40:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.925 23:40:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:58.925 23:40:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:58.925 23:40:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:58.925 23:40:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.925 [2024-07-15 23:40:33.958543] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:13:58.925 [2024-07-15 23:40:33.958610] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:58.925 EAL: No free 2048 kB hugepages reported on node 1 00:13:58.925 [2024-07-15 23:40:34.026456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:59.183 [2024-07-15 23:40:34.134282] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:59.183 [2024-07-15 23:40:34.134335] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:59.183 [2024-07-15 23:40:34.134363] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:59.183 [2024-07-15 23:40:34.134374] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:59.183 [2024-07-15 23:40:34.134383] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:59.183 [2024-07-15 23:40:34.137976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:59.183 [2024-07-15 23:40:34.138039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:59.183 [2024-07-15 23:40:34.138106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:59.183 [2024-07-15 23:40:34.138110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.183 23:40:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:59.183 23:40:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:13:59.183 23:40:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:59.183 23:40:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:59.183 23:40:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.183 23:40:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:59.183 23:40:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:59.440 [2024-07-15 23:40:34.506524] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:59.440 23:40:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:59.697 23:40:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:59.697 23:40:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:59.965 23:40:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:59.965 23:40:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:00.529 23:40:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
00:14:00.529 23:40:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:00.529 23:40:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:14:00.529 23:40:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:14:00.785 23:40:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:01.042 23:40:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:14:01.042 23:40:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:01.299 23:40:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:14:01.299 23:40:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:01.556 23:40:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:14:01.556 23:40:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:14:01.813 23:40:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:02.070 23:40:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:02.070 23:40:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:02.327 23:40:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:02.327 23:40:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:02.585 23:40:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:02.842 [2024-07-15 23:40:37.855544] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:02.842 23:40:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:14:03.098 23:40:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:14:03.355 23:40:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:03.920 23:40:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:14:03.920 23:40:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:14:03.920 23:40:38 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:03.920 23:40:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:14:03.920 23:40:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:14:03.920 23:40:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:14:06.444 23:40:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:06.444 23:40:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:06.444 23:40:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:06.444 23:40:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:14:06.444 23:40:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:06.444 23:40:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:14:06.444 23:40:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:06.444 [global] 00:14:06.444 thread=1 00:14:06.444 invalidate=1 00:14:06.444 rw=write 00:14:06.444 time_based=1 00:14:06.444 runtime=1 00:14:06.444 ioengine=libaio 00:14:06.444 direct=1 00:14:06.444 bs=4096 00:14:06.444 iodepth=1 00:14:06.444 norandommap=0 00:14:06.444 numjobs=1 00:14:06.444 00:14:06.444 verify_dump=1 00:14:06.444 verify_backlog=512 00:14:06.444 verify_state_save=0 00:14:06.444 do_verify=1 00:14:06.444 verify=crc32c-intel 00:14:06.444 [job0] 00:14:06.444 filename=/dev/nvme0n1 00:14:06.444 [job1] 00:14:06.444 filename=/dev/nvme0n2 00:14:06.444 [job2] 00:14:06.444 filename=/dev/nvme0n3 00:14:06.444 [job3] 00:14:06.444 filename=/dev/nvme0n4 00:14:06.444 Could not set queue depth (nvme0n1) 00:14:06.444 Could not set queue depth (nvme0n2) 00:14:06.444 Could not set queue depth (nvme0n3) 00:14:06.444 Could not set queue depth (nvme0n4) 00:14:06.444 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:06.444 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:06.444 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:06.444 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:06.444 fio-3.35 00:14:06.444 Starting 4 threads 00:14:07.375 00:14:07.375 job0: (groupid=0, jobs=1): err= 0: pid=3769199: Mon Jul 15 23:40:42 2024 00:14:07.375 read: IOPS=798, BW=3193KiB/s (3269kB/s)(3196KiB/1001msec) 00:14:07.375 slat (nsec): min=5436, max=47911, avg=16062.48, stdev=6684.34 00:14:07.375 clat (usec): min=185, max=41977, avg=960.36, stdev=5357.28 00:14:07.375 lat (usec): min=196, max=42009, avg=976.42, stdev=5358.71 00:14:07.375 clat percentiles (usec): 00:14:07.375 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 206], 00:14:07.375 | 30.00th=[ 215], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 253], 00:14:07.375 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 285], 95.00th=[ 318], 00:14:07.375 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:14:07.375 | 99.99th=[42206] 00:14:07.375 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:14:07.375 slat (nsec): min=5987, max=40221, avg=13066.17, stdev=6187.76 00:14:07.375 clat 
(usec): min=146, max=504, avg=194.19, stdev=44.74 00:14:07.375 lat (usec): min=163, max=513, avg=207.25, stdev=42.87 00:14:07.375 clat percentiles (usec): 00:14:07.375 | 1.00th=[ 153], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 159], 00:14:07.375 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 178], 60.00th=[ 208], 00:14:07.375 | 70.00th=[ 215], 80.00th=[ 221], 90.00th=[ 231], 95.00th=[ 249], 00:14:07.375 | 99.00th=[ 396], 99.50th=[ 408], 99.90th=[ 445], 99.95th=[ 506], 00:14:07.375 | 99.99th=[ 506] 00:14:07.375 bw ( KiB/s): min= 4096, max= 4096, per=41.56%, avg=4096.00, stdev= 0.00, samples=1 00:14:07.375 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:07.375 lat (usec) : 250=77.67%, 500=21.34%, 750=0.16% 00:14:07.375 lat (msec) : 2=0.05%, 50=0.77% 00:14:07.375 cpu : usr=1.30%, sys=2.90%, ctx=1823, majf=0, minf=1 00:14:07.375 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:07.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:07.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:07.375 issued rwts: total=799,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:07.375 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:07.375 job1: (groupid=0, jobs=1): err= 0: pid=3769200: Mon Jul 15 23:40:42 2024 00:14:07.375 read: IOPS=33, BW=135KiB/s (138kB/s)(140KiB/1039msec) 00:14:07.375 slat (nsec): min=5887, max=38874, avg=22245.69, stdev=13802.92 00:14:07.375 clat (usec): min=225, max=42025, avg=26228.21, stdev=20267.14 00:14:07.375 lat (usec): min=232, max=42062, avg=26250.45, stdev=20279.51 00:14:07.375 clat percentiles (usec): 00:14:07.375 | 1.00th=[ 227], 5.00th=[ 227], 10.00th=[ 233], 20.00th=[ 241], 00:14:07.375 | 30.00th=[ 269], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:14:07.375 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:14:07.375 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:07.375 | 99.99th=[42206] 00:14:07.375 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:14:07.375 slat (nsec): min=7201, max=37950, avg=9468.12, stdev=3091.79 00:14:07.375 clat (usec): min=157, max=399, avg=220.30, stdev=33.75 00:14:07.375 lat (usec): min=165, max=414, avg=229.77, stdev=34.01 00:14:07.375 clat percentiles (usec): 00:14:07.375 | 1.00th=[ 174], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 192], 00:14:07.375 | 30.00th=[ 198], 40.00th=[ 206], 50.00th=[ 217], 60.00th=[ 233], 00:14:07.375 | 70.00th=[ 241], 80.00th=[ 243], 90.00th=[ 249], 95.00th=[ 269], 00:14:07.375 | 99.00th=[ 343], 99.50th=[ 392], 99.90th=[ 400], 99.95th=[ 400], 00:14:07.375 | 99.99th=[ 400] 00:14:07.375 bw ( KiB/s): min= 4096, max= 4096, per=41.56%, avg=4096.00, stdev= 0.00, samples=1 00:14:07.375 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:07.375 lat (usec) : 250=86.29%, 500=9.69% 00:14:07.375 lat (msec) : 50=4.02% 00:14:07.375 cpu : usr=0.19%, sys=0.87%, ctx=549, majf=0, minf=1 00:14:07.375 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:07.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:07.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:07.375 issued rwts: total=35,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:07.375 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:07.375 job2: (groupid=0, jobs=1): err= 0: pid=3769201: Mon Jul 15 23:40:42 2024 00:14:07.375 read: IOPS=25, BW=100KiB/s 
(103kB/s)(104KiB/1035msec) 00:14:07.375 slat (nsec): min=6799, max=48773, avg=27934.77, stdev=10277.28 00:14:07.375 clat (usec): min=227, max=42045, avg=35304.30, stdev=15225.56 00:14:07.375 lat (usec): min=243, max=42078, avg=35332.24, stdev=15229.59 00:14:07.375 clat percentiles (usec): 00:14:07.375 | 1.00th=[ 229], 5.00th=[ 265], 10.00th=[ 330], 20.00th=[41157], 00:14:07.375 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:14:07.375 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:14:07.375 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:07.375 | 99.99th=[42206] 00:14:07.375 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:14:07.375 slat (nsec): min=5875, max=38621, avg=8004.67, stdev=3099.22 00:14:07.375 clat (usec): min=158, max=385, avg=216.22, stdev=38.90 00:14:07.375 lat (usec): min=164, max=391, avg=224.22, stdev=39.37 00:14:07.375 clat percentiles (usec): 00:14:07.375 | 1.00th=[ 165], 5.00th=[ 172], 10.00th=[ 174], 20.00th=[ 182], 00:14:07.375 | 30.00th=[ 192], 40.00th=[ 202], 50.00th=[ 212], 60.00th=[ 219], 00:14:07.375 | 70.00th=[ 227], 80.00th=[ 241], 90.00th=[ 277], 95.00th=[ 293], 00:14:07.375 | 99.00th=[ 363], 99.50th=[ 379], 99.90th=[ 388], 99.95th=[ 388], 00:14:07.375 | 99.99th=[ 388] 00:14:07.375 bw ( KiB/s): min= 4096, max= 4096, per=41.56%, avg=4096.00, stdev= 0.00, samples=1 00:14:07.375 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:07.375 lat (usec) : 250=79.93%, 500=15.99% 00:14:07.375 lat (msec) : 50=4.09% 00:14:07.375 cpu : usr=0.10%, sys=0.48%, ctx=539, majf=0, minf=2 00:14:07.375 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:07.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:07.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:07.375 issued rwts: total=26,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:07.375 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:07.375 job3: (groupid=0, jobs=1): err= 0: pid=3769202: Mon Jul 15 23:40:42 2024 00:14:07.375 read: IOPS=20, BW=83.8KiB/s (85.8kB/s)(84.0KiB/1002msec) 00:14:07.375 slat (nsec): min=6201, max=35146, avg=31130.90, stdev=7930.45 00:14:07.375 clat (usec): min=40920, max=42015, avg=41738.27, stdev=410.31 00:14:07.375 lat (usec): min=40954, max=42049, avg=41769.40, stdev=412.45 00:14:07.375 clat percentiles (usec): 00:14:07.375 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:14:07.375 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:14:07.375 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:14:07.375 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:07.375 | 99.99th=[42206] 00:14:07.375 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:14:07.375 slat (nsec): min=6435, max=26753, avg=8177.64, stdev=2677.88 00:14:07.375 clat (usec): min=145, max=445, avg=232.36, stdev=35.86 00:14:07.375 lat (usec): min=154, max=459, avg=240.54, stdev=36.56 00:14:07.375 clat percentiles (usec): 00:14:07.375 | 1.00th=[ 157], 5.00th=[ 196], 10.00th=[ 204], 20.00th=[ 210], 00:14:07.375 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 235], 00:14:07.375 | 70.00th=[ 243], 80.00th=[ 247], 90.00th=[ 277], 95.00th=[ 302], 00:14:07.375 | 99.00th=[ 371], 99.50th=[ 388], 99.90th=[ 445], 99.95th=[ 445], 00:14:07.375 | 99.99th=[ 445] 00:14:07.375 bw ( KiB/s): min= 4096, max= 
4096, per=41.56%, avg=4096.00, stdev= 0.00, samples=1 00:14:07.376 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:07.376 lat (usec) : 250=82.55%, 500=13.51% 00:14:07.376 lat (msec) : 50=3.94% 00:14:07.376 cpu : usr=0.40%, sys=0.30%, ctx=535, majf=0, minf=1 00:14:07.376 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:07.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:07.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:07.376 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:07.376 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:07.376 00:14:07.376 Run status group 0 (all jobs): 00:14:07.376 READ: bw=3392KiB/s (3473kB/s), 83.8KiB/s-3193KiB/s (85.8kB/s-3269kB/s), io=3524KiB (3609kB), run=1001-1039msec 00:14:07.376 WRITE: bw=9856KiB/s (10.1MB/s), 1971KiB/s-4092KiB/s (2018kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1039msec 00:14:07.376 00:14:07.376 Disk stats (read/write): 00:14:07.376 nvme0n1: ios=562/643, merge=0/0, ticks=830/132, in_queue=962, util=90.48% 00:14:07.376 nvme0n2: ios=55/512, merge=0/0, ticks=1670/113, in_queue=1783, util=98.07% 00:14:07.376 nvme0n3: ios=21/512, merge=0/0, ticks=709/109, in_queue=818, util=88.91% 00:14:07.376 nvme0n4: ios=74/512, merge=0/0, ticks=1628/114, in_queue=1742, util=98.10% 00:14:07.376 23:40:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:14:07.376 [global] 00:14:07.376 thread=1 00:14:07.376 invalidate=1 00:14:07.376 rw=randwrite 00:14:07.376 time_based=1 00:14:07.376 runtime=1 00:14:07.376 ioengine=libaio 00:14:07.376 direct=1 00:14:07.376 bs=4096 00:14:07.376 iodepth=1 00:14:07.376 norandommap=0 00:14:07.376 numjobs=1 00:14:07.376 00:14:07.376 verify_dump=1 00:14:07.376 verify_backlog=512 00:14:07.376 verify_state_save=0 00:14:07.376 do_verify=1 00:14:07.376 verify=crc32c-intel 00:14:07.376 [job0] 00:14:07.376 filename=/dev/nvme0n1 00:14:07.376 [job1] 00:14:07.376 filename=/dev/nvme0n2 00:14:07.376 [job2] 00:14:07.376 filename=/dev/nvme0n3 00:14:07.376 [job3] 00:14:07.376 filename=/dev/nvme0n4 00:14:07.376 Could not set queue depth (nvme0n1) 00:14:07.376 Could not set queue depth (nvme0n2) 00:14:07.376 Could not set queue depth (nvme0n3) 00:14:07.376 Could not set queue depth (nvme0n4) 00:14:07.631 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:07.631 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:07.631 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:07.631 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:07.631 fio-3.35 00:14:07.631 Starting 4 threads 00:14:09.063 00:14:09.063 job0: (groupid=0, jobs=1): err= 0: pid=3769548: Mon Jul 15 23:40:43 2024 00:14:09.063 read: IOPS=22, BW=91.2KiB/s (93.4kB/s)(92.0KiB/1009msec) 00:14:09.063 slat (nsec): min=5698, max=32109, avg=13834.57, stdev=4886.59 00:14:09.063 clat (usec): min=407, max=42217, avg=39275.13, stdev=9069.22 00:14:09.063 lat (usec): min=424, max=42225, avg=39288.96, stdev=9068.28 00:14:09.063 clat percentiles (usec): 00:14:09.063 | 1.00th=[ 408], 5.00th=[26346], 10.00th=[40633], 20.00th=[41157], 00:14:09.063 | 30.00th=[41157], 40.00th=[42206], 
50.00th=[42206], 60.00th=[42206], 00:14:09.063 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:14:09.063 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:09.063 | 99.99th=[42206] 00:14:09.063 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:14:09.063 slat (nsec): min=5441, max=48712, avg=6886.04, stdev=2496.06 00:14:09.063 clat (usec): min=160, max=254, avg=195.66, stdev=15.27 00:14:09.063 lat (usec): min=166, max=294, avg=202.55, stdev=15.77 00:14:09.063 clat percentiles (usec): 00:14:09.063 | 1.00th=[ 167], 5.00th=[ 172], 10.00th=[ 178], 20.00th=[ 184], 00:14:09.063 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 194], 60.00th=[ 198], 00:14:09.063 | 70.00th=[ 202], 80.00th=[ 208], 90.00th=[ 215], 95.00th=[ 223], 00:14:09.063 | 99.00th=[ 243], 99.50th=[ 245], 99.90th=[ 255], 99.95th=[ 255], 00:14:09.063 | 99.99th=[ 255] 00:14:09.063 bw ( KiB/s): min= 4096, max= 4096, per=29.51%, avg=4096.00, stdev= 0.00, samples=1 00:14:09.063 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:09.063 lat (usec) : 250=95.51%, 500=0.37% 00:14:09.063 lat (msec) : 50=4.11% 00:14:09.063 cpu : usr=0.10%, sys=0.40%, ctx=536, majf=0, minf=1 00:14:09.063 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:09.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:09.063 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:09.063 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:09.063 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:09.063 job1: (groupid=0, jobs=1): err= 0: pid=3769549: Mon Jul 15 23:40:43 2024 00:14:09.063 read: IOPS=1525, BW=6101KiB/s (6247kB/s)(6168KiB/1011msec) 00:14:09.063 slat (nsec): min=4481, max=30378, avg=11114.54, stdev=4104.85 00:14:09.063 clat (usec): min=223, max=41026, avg=390.86, stdev=2335.07 00:14:09.063 lat (usec): min=229, max=41043, avg=401.98, stdev=2335.27 00:14:09.063 clat percentiles (usec): 00:14:09.063 | 1.00th=[ 231], 5.00th=[ 235], 10.00th=[ 237], 20.00th=[ 241], 00:14:09.063 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 249], 00:14:09.063 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 260], 95.00th=[ 265], 00:14:09.063 | 99.00th=[ 334], 99.50th=[ 449], 99.90th=[41157], 99.95th=[41157], 00:14:09.063 | 99.99th=[41157] 00:14:09.063 write: IOPS=2025, BW=8103KiB/s (8297kB/s)(8192KiB/1011msec); 0 zone resets 00:14:09.063 slat (nsec): min=6000, max=49477, avg=11676.15, stdev=5392.48 00:14:09.063 clat (usec): min=137, max=316, avg=173.22, stdev=22.30 00:14:09.063 lat (usec): min=144, max=324, avg=184.90, stdev=21.45 00:14:09.063 clat percentiles (usec): 00:14:09.063 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:14:09.063 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 176], 00:14:09.063 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 200], 95.00th=[ 208], 00:14:09.063 | 99.00th=[ 255], 99.50th=[ 265], 99.90th=[ 277], 99.95th=[ 277], 00:14:09.063 | 99.99th=[ 318] 00:14:09.063 bw ( KiB/s): min= 6464, max= 9920, per=59.03%, avg=8192.00, stdev=2443.76, samples=2 00:14:09.063 iops : min= 1616, max= 2480, avg=2048.00, stdev=610.94, samples=2 00:14:09.063 lat (usec) : 250=83.70%, 500=16.10% 00:14:09.063 lat (msec) : 4=0.03%, 20=0.03%, 50=0.14% 00:14:09.063 cpu : usr=2.57%, sys=3.86%, ctx=3591, majf=0, minf=1 00:14:09.063 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:09.063 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:09.063 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:09.063 issued rwts: total=1542,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:09.063 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:09.063 job2: (groupid=0, jobs=1): err= 0: pid=3769556: Mon Jul 15 23:40:43 2024 00:14:09.063 read: IOPS=23, BW=92.9KiB/s (95.2kB/s)(96.0KiB/1033msec) 00:14:09.063 slat (nsec): min=12372, max=27456, avg=15621.96, stdev=3836.50 00:14:09.063 clat (usec): min=315, max=41995, avg=38216.01, stdev=11674.27 00:14:09.063 lat (usec): min=329, max=42013, avg=38231.63, stdev=11675.01 00:14:09.063 clat percentiles (usec): 00:14:09.063 | 1.00th=[ 314], 5.00th=[ 359], 10.00th=[41157], 20.00th=[41157], 00:14:09.063 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:14:09.063 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:14:09.063 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:09.063 | 99.99th=[42206] 00:14:09.063 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:14:09.063 slat (nsec): min=6959, max=48046, avg=8838.54, stdev=3703.03 00:14:09.064 clat (usec): min=176, max=363, avg=212.52, stdev=17.59 00:14:09.064 lat (usec): min=184, max=372, avg=221.36, stdev=18.15 00:14:09.064 clat percentiles (usec): 00:14:09.064 | 1.00th=[ 184], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 198], 00:14:09.064 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 217], 00:14:09.064 | 70.00th=[ 221], 80.00th=[ 225], 90.00th=[ 233], 95.00th=[ 241], 00:14:09.064 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 363], 99.95th=[ 363], 00:14:09.064 | 99.99th=[ 363] 00:14:09.064 bw ( KiB/s): min= 4096, max= 4096, per=29.51%, avg=4096.00, stdev= 0.00, samples=1 00:14:09.064 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:09.064 lat (usec) : 250=93.66%, 500=2.24% 00:14:09.064 lat (msec) : 50=4.10% 00:14:09.064 cpu : usr=0.58%, sys=0.29%, ctx=537, majf=0, minf=1 00:14:09.064 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:09.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:09.064 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:09.064 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:09.064 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:09.064 job3: (groupid=0, jobs=1): err= 0: pid=3769557: Mon Jul 15 23:40:43 2024 00:14:09.064 read: IOPS=22, BW=89.8KiB/s (91.9kB/s)(92.0KiB/1025msec) 00:14:09.064 slat (nsec): min=7516, max=28492, avg=11585.87, stdev=4353.19 00:14:09.064 clat (usec): min=248, max=42026, avg=40007.04, stdev=8674.59 00:14:09.064 lat (usec): min=258, max=42036, avg=40018.63, stdev=8674.97 00:14:09.064 clat percentiles (usec): 00:14:09.064 | 1.00th=[ 249], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:14:09.064 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:14:09.064 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:14:09.064 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:09.064 | 99.99th=[42206] 00:14:09.064 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:14:09.064 slat (nsec): min=8209, max=48602, avg=9649.24, stdev=2696.06 00:14:09.064 clat (usec): min=155, max=241, avg=190.05, stdev=16.38 00:14:09.064 lat (usec): min=164, max=261, avg=199.70, stdev=16.88 00:14:09.064 clat 
percentiles (usec): 00:14:09.064 | 1.00th=[ 157], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 176], 00:14:09.064 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 194], 00:14:09.064 | 70.00th=[ 198], 80.00th=[ 204], 90.00th=[ 212], 95.00th=[ 219], 00:14:09.064 | 99.00th=[ 231], 99.50th=[ 237], 99.90th=[ 241], 99.95th=[ 241], 00:14:09.064 | 99.99th=[ 241] 00:14:09.064 bw ( KiB/s): min= 4096, max= 4096, per=29.51%, avg=4096.00, stdev= 0.00, samples=1 00:14:09.064 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:09.064 lat (usec) : 250=95.89% 00:14:09.064 lat (msec) : 50=4.11% 00:14:09.064 cpu : usr=0.20%, sys=0.78%, ctx=537, majf=0, minf=2 00:14:09.064 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:09.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:09.064 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:09.064 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:09.064 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:09.064 00:14:09.064 Run status group 0 (all jobs): 00:14:09.064 READ: bw=6242KiB/s (6392kB/s), 89.8KiB/s-6101KiB/s (91.9kB/s-6247kB/s), io=6448KiB (6603kB), run=1009-1033msec 00:14:09.064 WRITE: bw=13.6MiB/s (14.2MB/s), 1983KiB/s-8103KiB/s (2030kB/s-8297kB/s), io=14.0MiB (14.7MB), run=1009-1033msec 00:14:09.064 00:14:09.064 Disk stats (read/write): 00:14:09.064 nvme0n1: ios=63/512, merge=0/0, ticks=853/91, in_queue=944, util=94.79% 00:14:09.064 nvme0n2: ios=1586/2048, merge=0/0, ticks=424/347, in_queue=771, util=87.60% 00:14:09.064 nvme0n3: ios=76/512, merge=0/0, ticks=788/97, in_queue=885, util=90.38% 00:14:09.064 nvme0n4: ios=68/512, merge=0/0, ticks=1154/91, in_queue=1245, util=98.52% 00:14:09.064 23:40:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:14:09.064 [global] 00:14:09.064 thread=1 00:14:09.064 invalidate=1 00:14:09.064 rw=write 00:14:09.064 time_based=1 00:14:09.064 runtime=1 00:14:09.064 ioengine=libaio 00:14:09.064 direct=1 00:14:09.064 bs=4096 00:14:09.064 iodepth=128 00:14:09.064 norandommap=0 00:14:09.064 numjobs=1 00:14:09.064 00:14:09.064 verify_dump=1 00:14:09.064 verify_backlog=512 00:14:09.064 verify_state_save=0 00:14:09.064 do_verify=1 00:14:09.064 verify=crc32c-intel 00:14:09.064 [job0] 00:14:09.064 filename=/dev/nvme0n1 00:14:09.064 [job1] 00:14:09.064 filename=/dev/nvme0n2 00:14:09.064 [job2] 00:14:09.064 filename=/dev/nvme0n3 00:14:09.064 [job3] 00:14:09.064 filename=/dev/nvme0n4 00:14:09.064 Could not set queue depth (nvme0n1) 00:14:09.064 Could not set queue depth (nvme0n2) 00:14:09.064 Could not set queue depth (nvme0n3) 00:14:09.064 Could not set queue depth (nvme0n4) 00:14:09.064 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:09.064 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:09.064 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:09.064 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:09.064 fio-3.35 00:14:09.064 Starting 4 threads 00:14:10.443 00:14:10.443 job0: (groupid=0, jobs=1): err= 0: pid=3769784: Mon Jul 15 23:40:45 2024 00:14:10.443 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:14:10.443 slat 
(usec): min=2, max=11504, avg=115.29, stdev=771.61 00:14:10.443 clat (usec): min=7583, max=49506, avg=15046.56, stdev=4897.11 00:14:10.443 lat (usec): min=7862, max=49515, avg=15161.85, stdev=4980.05 00:14:10.443 clat percentiles (usec): 00:14:10.443 | 1.00th=[ 8225], 5.00th=[ 9372], 10.00th=[10028], 20.00th=[11207], 00:14:10.443 | 30.00th=[11863], 40.00th=[12911], 50.00th=[14091], 60.00th=[15139], 00:14:10.443 | 70.00th=[17171], 80.00th=[18482], 90.00th=[21103], 95.00th=[24511], 00:14:10.443 | 99.00th=[28181], 99.50th=[38011], 99.90th=[49546], 99.95th=[49546], 00:14:10.443 | 99.99th=[49546] 00:14:10.443 write: IOPS=3744, BW=14.6MiB/s (15.3MB/s)(14.7MiB/1006msec); 0 zone resets 00:14:10.443 slat (usec): min=4, max=10614, avg=143.68, stdev=783.36 00:14:10.443 clat (usec): min=1079, max=90062, avg=19563.07, stdev=14753.83 00:14:10.443 lat (usec): min=1089, max=90069, avg=19706.75, stdev=14827.94 00:14:10.443 clat percentiles (usec): 00:14:10.443 | 1.00th=[ 5342], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[11600], 00:14:10.443 | 30.00th=[12387], 40.00th=[13173], 50.00th=[14091], 60.00th=[17433], 00:14:10.443 | 70.00th=[19006], 80.00th=[23200], 90.00th=[31589], 95.00th=[56361], 00:14:10.443 | 99.00th=[85459], 99.50th=[88605], 99.90th=[89654], 99.95th=[89654], 00:14:10.443 | 99.99th=[89654] 00:14:10.443 bw ( KiB/s): min=13256, max=15864, per=24.45%, avg=14560.00, stdev=1844.13, samples=2 00:14:10.443 iops : min= 3314, max= 3966, avg=3640.00, stdev=461.03, samples=2 00:14:10.443 lat (msec) : 2=0.26%, 10=12.38%, 20=68.24%, 50=15.92%, 100=3.21% 00:14:10.443 cpu : usr=6.27%, sys=8.36%, ctx=317, majf=0, minf=1 00:14:10.443 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:14:10.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:10.443 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:10.443 issued rwts: total=3584,3767,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:10.444 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:10.444 job1: (groupid=0, jobs=1): err= 0: pid=3769785: Mon Jul 15 23:40:45 2024 00:14:10.444 read: IOPS=3581, BW=14.0MiB/s (14.7MB/s)(14.1MiB/1009msec) 00:14:10.444 slat (usec): min=2, max=16245, avg=137.60, stdev=919.85 00:14:10.444 clat (msec): min=5, max=100, avg=16.58, stdev=12.83 00:14:10.444 lat (msec): min=5, max=100, avg=16.72, stdev=12.95 00:14:10.444 clat percentiles (msec): 00:14:10.444 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 11], 00:14:10.444 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 13], 00:14:10.444 | 70.00th=[ 14], 80.00th=[ 23], 90.00th=[ 29], 95.00th=[ 41], 00:14:10.444 | 99.00th=[ 83], 99.50th=[ 92], 99.90th=[ 101], 99.95th=[ 101], 00:14:10.444 | 99.99th=[ 101] 00:14:10.444 write: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec); 0 zone resets 00:14:10.444 slat (usec): min=3, max=11180, avg=109.18, stdev=629.06 00:14:10.444 clat (msec): min=2, max=100, avg=16.60, stdev=13.18 00:14:10.444 lat (msec): min=2, max=100, avg=16.71, stdev=13.23 00:14:10.444 clat percentiles (msec): 00:14:10.444 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 9], 20.00th=[ 10], 00:14:10.444 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 15], 00:14:10.444 | 70.00th=[ 18], 80.00th=[ 20], 90.00th=[ 34], 95.00th=[ 47], 00:14:10.444 | 99.00th=[ 78], 99.50th=[ 82], 99.90th=[ 86], 99.95th=[ 86], 00:14:10.444 | 99.99th=[ 101] 00:14:10.444 bw ( KiB/s): min=12336, max=19656, per=26.86%, avg=15996.00, stdev=5176.02, samples=2 00:14:10.444 iops : min= 3084, max= 4914, 
avg=3999.00, stdev=1294.01, samples=2 00:14:10.444 lat (msec) : 4=0.56%, 10=23.28%, 20=54.36%, 50=18.35%, 100=3.36% 00:14:10.444 lat (msec) : 250=0.09% 00:14:10.444 cpu : usr=4.86%, sys=8.73%, ctx=325, majf=0, minf=1 00:14:10.444 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:14:10.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:10.444 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:10.444 issued rwts: total=3614,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:10.444 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:10.444 job2: (groupid=0, jobs=1): err= 0: pid=3769787: Mon Jul 15 23:40:45 2024 00:14:10.444 read: IOPS=3600, BW=14.1MiB/s (14.7MB/s)(14.7MiB/1044msec) 00:14:10.444 slat (usec): min=2, max=18311, avg=130.49, stdev=965.58 00:14:10.444 clat (usec): min=4239, max=61983, avg=18583.42, stdev=9813.97 00:14:10.444 lat (usec): min=4245, max=61990, avg=18713.91, stdev=9862.77 00:14:10.444 clat percentiles (usec): 00:14:10.444 | 1.00th=[ 6915], 5.00th=[ 9765], 10.00th=[11076], 20.00th=[11600], 00:14:10.444 | 30.00th=[11994], 40.00th=[14484], 50.00th=[15270], 60.00th=[16581], 00:14:10.444 | 70.00th=[20055], 80.00th=[25035], 90.00th=[30802], 95.00th=[34866], 00:14:10.444 | 99.00th=[56361], 99.50th=[61604], 99.90th=[62129], 99.95th=[62129], 00:14:10.444 | 99.99th=[62129] 00:14:10.444 write: IOPS=3923, BW=15.3MiB/s (16.1MB/s)(16.0MiB/1044msec); 0 zone resets 00:14:10.444 slat (usec): min=3, max=19618, avg=108.40, stdev=792.11 00:14:10.444 clat (usec): min=284, max=37266, avg=15297.71, stdev=6173.94 00:14:10.444 lat (usec): min=320, max=37278, avg=15406.11, stdev=6245.46 00:14:10.444 clat percentiles (usec): 00:14:10.444 | 1.00th=[ 2024], 5.00th=[ 5735], 10.00th=[ 9372], 20.00th=[10421], 00:14:10.444 | 30.00th=[11207], 40.00th=[12780], 50.00th=[13698], 60.00th=[16581], 00:14:10.444 | 70.00th=[18482], 80.00th=[21627], 90.00th=[22676], 95.00th=[25822], 00:14:10.444 | 99.00th=[33162], 99.50th=[33424], 99.90th=[34866], 99.95th=[36439], 00:14:10.444 | 99.99th=[37487] 00:14:10.444 bw ( KiB/s): min=16384, max=16384, per=27.51%, avg=16384.00, stdev= 0.00, samples=2 00:14:10.444 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:14:10.444 lat (usec) : 500=0.03%, 750=0.08%, 1000=0.13% 00:14:10.444 lat (msec) : 2=0.28%, 4=0.97%, 10=9.74%, 20=61.35%, 50=26.44% 00:14:10.444 lat (msec) : 100=0.99% 00:14:10.444 cpu : usr=3.45%, sys=5.47%, ctx=334, majf=0, minf=1 00:14:10.444 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:14:10.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:10.444 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:10.444 issued rwts: total=3759,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:10.444 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:10.444 job3: (groupid=0, jobs=1): err= 0: pid=3769788: Mon Jul 15 23:40:45 2024 00:14:10.444 read: IOPS=3459, BW=13.5MiB/s (14.2MB/s)(13.6MiB/1003msec) 00:14:10.444 slat (usec): min=2, max=21627, avg=132.34, stdev=920.59 00:14:10.444 clat (usec): min=2608, max=76213, avg=17295.50, stdev=9511.37 00:14:10.444 lat (usec): min=2622, max=76218, avg=17427.84, stdev=9574.44 00:14:10.444 clat percentiles (usec): 00:14:10.444 | 1.00th=[ 7570], 5.00th=[10159], 10.00th=[11994], 20.00th=[12911], 00:14:10.444 | 30.00th=[13435], 40.00th=[13698], 50.00th=[13960], 60.00th=[14746], 00:14:10.444 | 70.00th=[15795], 80.00th=[17433], 
90.00th=[29230], 95.00th=[36963], 00:14:10.444 | 99.00th=[59507], 99.50th=[61604], 99.90th=[64226], 99.95th=[64226], 00:14:10.444 | 99.99th=[76022] 00:14:10.444 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:14:10.444 slat (usec): min=3, max=30311, avg=139.73, stdev=1104.36 00:14:10.444 clat (usec): min=4068, max=68229, avg=18717.06, stdev=9687.27 00:14:10.444 lat (usec): min=4079, max=68295, avg=18856.79, stdev=9765.33 00:14:10.444 clat percentiles (usec): 00:14:10.444 | 1.00th=[ 7963], 5.00th=[ 9634], 10.00th=[12518], 20.00th=[13304], 00:14:10.444 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14222], 60.00th=[15533], 00:14:10.444 | 70.00th=[21103], 80.00th=[22676], 90.00th=[29754], 95.00th=[37487], 00:14:10.444 | 99.00th=[59507], 99.50th=[59507], 99.90th=[60031], 99.95th=[64226], 00:14:10.444 | 99.99th=[68682] 00:14:10.444 bw ( KiB/s): min=12976, max=15696, per=24.07%, avg=14336.00, stdev=1923.33, samples=2 00:14:10.444 iops : min= 3244, max= 3924, avg=3584.00, stdev=480.83, samples=2 00:14:10.444 lat (msec) : 4=0.34%, 10=4.41%, 20=71.17%, 50=21.17%, 100=2.92% 00:14:10.444 cpu : usr=4.49%, sys=8.38%, ctx=322, majf=0, minf=1 00:14:10.444 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:14:10.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:10.444 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:10.444 issued rwts: total=3470,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:10.444 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:10.444 00:14:10.444 Run status group 0 (all jobs): 00:14:10.444 READ: bw=54.0MiB/s (56.6MB/s), 13.5MiB/s-14.1MiB/s (14.2MB/s-14.7MB/s), io=56.4MiB (59.1MB), run=1003-1044msec 00:14:10.444 WRITE: bw=58.2MiB/s (61.0MB/s), 14.0MiB/s-15.9MiB/s (14.6MB/s-16.6MB/s), io=60.7MiB (63.7MB), run=1003-1044msec 00:14:10.444 00:14:10.444 Disk stats (read/write): 00:14:10.444 nvme0n1: ios=3113/3267, merge=0/0, ticks=29975/47979, in_queue=77954, util=86.27% 00:14:10.444 nvme0n2: ios=2667/3072, merge=0/0, ticks=39936/47069, in_queue=87005, util=97.87% 00:14:10.444 nvme0n3: ios=3394/3584, merge=0/0, ticks=49286/51067, in_queue=100353, util=88.61% 00:14:10.444 nvme0n4: ios=2645/3072, merge=0/0, ticks=31808/40046, in_queue=71854, util=97.26% 00:14:10.444 23:40:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:14:10.444 [global] 00:14:10.444 thread=1 00:14:10.444 invalidate=1 00:14:10.444 rw=randwrite 00:14:10.444 time_based=1 00:14:10.444 runtime=1 00:14:10.444 ioengine=libaio 00:14:10.444 direct=1 00:14:10.444 bs=4096 00:14:10.444 iodepth=128 00:14:10.444 norandommap=0 00:14:10.444 numjobs=1 00:14:10.444 00:14:10.444 verify_dump=1 00:14:10.444 verify_backlog=512 00:14:10.444 verify_state_save=0 00:14:10.444 do_verify=1 00:14:10.444 verify=crc32c-intel 00:14:10.444 [job0] 00:14:10.444 filename=/dev/nvme0n1 00:14:10.444 [job1] 00:14:10.444 filename=/dev/nvme0n2 00:14:10.444 [job2] 00:14:10.444 filename=/dev/nvme0n3 00:14:10.444 [job3] 00:14:10.444 filename=/dev/nvme0n4 00:14:10.444 Could not set queue depth (nvme0n1) 00:14:10.444 Could not set queue depth (nvme0n2) 00:14:10.444 Could not set queue depth (nvme0n3) 00:14:10.444 Could not set queue depth (nvme0n4) 00:14:10.702 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:10.702 job1: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:10.702 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:10.702 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:10.702 fio-3.35 00:14:10.702 Starting 4 threads 00:14:12.081 00:14:12.081 job0: (groupid=0, jobs=1): err= 0: pid=3770013: Mon Jul 15 23:40:46 2024 00:14:12.081 read: IOPS=4013, BW=15.7MiB/s (16.4MB/s)(15.9MiB/1011msec) 00:14:12.081 slat (usec): min=2, max=17003, avg=121.03, stdev=839.86 00:14:12.081 clat (usec): min=2668, max=50594, avg=14617.92, stdev=7510.07 00:14:12.081 lat (usec): min=2688, max=50601, avg=14738.95, stdev=7563.55 00:14:12.081 clat percentiles (usec): 00:14:12.081 | 1.00th=[ 7963], 5.00th=[ 9110], 10.00th=[ 9896], 20.00th=[10421], 00:14:12.081 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11863], 60.00th=[13304], 00:14:12.081 | 70.00th=[14091], 80.00th=[16450], 90.00th=[21890], 95.00th=[30802], 00:14:12.081 | 99.00th=[47449], 99.50th=[49021], 99.90th=[50594], 99.95th=[50594], 00:14:12.081 | 99.99th=[50594] 00:14:12.081 write: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec); 0 zone resets 00:14:12.081 slat (usec): min=4, max=13263, avg=114.70, stdev=595.74 00:14:12.081 clat (usec): min=1413, max=50587, avg=16786.93, stdev=7435.99 00:14:12.081 lat (usec): min=1425, max=50594, avg=16901.63, stdev=7488.21 00:14:12.081 clat percentiles (usec): 00:14:12.081 | 1.00th=[ 4490], 5.00th=[ 7242], 10.00th=[ 8848], 20.00th=[10159], 00:14:12.081 | 30.00th=[10814], 40.00th=[11863], 50.00th=[14484], 60.00th=[21103], 00:14:12.081 | 70.00th=[22414], 80.00th=[23462], 90.00th=[26608], 95.00th=[29754], 00:14:12.081 | 99.00th=[31065], 99.50th=[33162], 99.90th=[35390], 99.95th=[42206], 00:14:12.081 | 99.99th=[50594] 00:14:12.081 bw ( KiB/s): min=15440, max=17328, per=24.52%, avg=16384.00, stdev=1335.02, samples=2 00:14:12.081 iops : min= 3860, max= 4332, avg=4096.00, stdev=333.75, samples=2 00:14:12.081 lat (msec) : 2=0.05%, 4=0.25%, 10=14.96%, 20=57.17%, 50=27.48% 00:14:12.081 lat (msec) : 100=0.09% 00:14:12.081 cpu : usr=4.95%, sys=9.01%, ctx=422, majf=0, minf=1 00:14:12.081 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:14:12.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:12.082 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:12.082 issued rwts: total=4058,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:12.082 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:12.082 job1: (groupid=0, jobs=1): err= 0: pid=3770014: Mon Jul 15 23:40:46 2024 00:14:12.082 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.1MiB/1010msec) 00:14:12.082 slat (usec): min=2, max=18048, avg=98.83, stdev=708.66 00:14:12.082 clat (usec): min=4488, max=34565, avg=12681.06, stdev=4321.04 00:14:12.082 lat (usec): min=4509, max=36358, avg=12779.89, stdev=4362.74 00:14:12.082 clat percentiles (usec): 00:14:12.082 | 1.00th=[ 5932], 5.00th=[ 8717], 10.00th=[ 9372], 20.00th=[10159], 00:14:12.082 | 30.00th=[10421], 40.00th=[10683], 50.00th=[11076], 60.00th=[12125], 00:14:12.082 | 70.00th=[12911], 80.00th=[15008], 90.00th=[18220], 95.00th=[20055], 00:14:12.082 | 99.00th=[30802], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341], 00:14:12.082 | 99.99th=[34341] 00:14:12.082 write: IOPS=5576, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1010msec); 0 zone resets 00:14:12.082 slat (usec): min=3, max=9454, avg=74.14, 
stdev=366.12 00:14:12.082 clat (usec): min=979, max=42590, avg=11082.80, stdev=3443.92 00:14:12.082 lat (usec): min=987, max=42595, avg=11156.94, stdev=3464.19 00:14:12.082 clat percentiles (usec): 00:14:12.082 | 1.00th=[ 4490], 5.00th=[ 5866], 10.00th=[ 7439], 20.00th=[ 9241], 00:14:12.082 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11207], 60.00th=[11469], 00:14:12.082 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12649], 95.00th=[16450], 00:14:12.082 | 99.00th=[26346], 99.50th=[27395], 99.90th=[27657], 99.95th=[32375], 00:14:12.082 | 99.99th=[42730] 00:14:12.082 bw ( KiB/s): min=20480, max=23800, per=33.13%, avg=22140.00, stdev=2347.59, samples=2 00:14:12.082 iops : min= 5120, max= 5950, avg=5535.00, stdev=586.90, samples=2 00:14:12.082 lat (usec) : 1000=0.03% 00:14:12.082 lat (msec) : 2=0.01%, 4=0.28%, 10=19.72%, 20=75.65%, 50=4.31% 00:14:12.082 cpu : usr=6.44%, sys=11.20%, ctx=632, majf=0, minf=1 00:14:12.082 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:14:12.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:12.082 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:12.082 issued rwts: total=5150,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:12.082 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:12.082 job2: (groupid=0, jobs=1): err= 0: pid=3770015: Mon Jul 15 23:40:46 2024 00:14:12.082 read: IOPS=4114, BW=16.1MiB/s (16.9MB/s)(16.2MiB/1007msec) 00:14:12.082 slat (usec): min=2, max=6848, avg=109.10, stdev=588.59 00:14:12.082 clat (usec): min=5693, max=26930, avg=14108.88, stdev=2628.84 00:14:12.082 lat (usec): min=6578, max=32474, avg=14217.99, stdev=2666.69 00:14:12.082 clat percentiles (usec): 00:14:12.082 | 1.00th=[ 8979], 5.00th=[10421], 10.00th=[11207], 20.00th=[12256], 00:14:12.082 | 30.00th=[13042], 40.00th=[13435], 50.00th=[13960], 60.00th=[14353], 00:14:12.082 | 70.00th=[14746], 80.00th=[15795], 90.00th=[16909], 95.00th=[18482], 00:14:12.082 | 99.00th=[24511], 99.50th=[26870], 99.90th=[26870], 99.95th=[26870], 00:14:12.082 | 99.99th=[26870] 00:14:12.082 write: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec); 0 zone resets 00:14:12.082 slat (usec): min=4, max=13951, avg=109.53, stdev=694.69 00:14:12.082 clat (usec): min=971, max=39733, avg=15032.42, stdev=3580.56 00:14:12.082 lat (usec): min=979, max=39757, avg=15141.96, stdev=3647.99 00:14:12.082 clat percentiles (usec): 00:14:12.082 | 1.00th=[ 8848], 5.00th=[11207], 10.00th=[12125], 20.00th=[13304], 00:14:12.082 | 30.00th=[13698], 40.00th=[14091], 50.00th=[14222], 60.00th=[14484], 00:14:12.082 | 70.00th=[14877], 80.00th=[15533], 90.00th=[19268], 95.00th=[25297], 00:14:12.082 | 99.00th=[28181], 99.50th=[28181], 99.90th=[30802], 99.95th=[38011], 00:14:12.082 | 99.99th=[39584] 00:14:12.082 bw ( KiB/s): min=17208, max=19016, per=27.10%, avg=18112.00, stdev=1278.45, samples=2 00:14:12.082 iops : min= 4302, max= 4754, avg=4528.00, stdev=319.61, samples=2 00:14:12.082 lat (usec) : 1000=0.05% 00:14:12.082 lat (msec) : 10=2.41%, 20=92.15%, 50=5.39% 00:14:12.082 cpu : usr=5.57%, sys=8.25%, ctx=439, majf=0, minf=1 00:14:12.082 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:14:12.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:12.082 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:12.082 issued rwts: total=4143,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:12.082 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:12.082 
job3: (groupid=0, jobs=1): err= 0: pid=3770016: Mon Jul 15 23:40:46 2024 00:14:12.082 read: IOPS=2025, BW=8103KiB/s (8297kB/s)(8192KiB/1011msec) 00:14:12.082 slat (usec): min=2, max=28959, avg=204.16, stdev=1372.17 00:14:12.082 clat (usec): min=5507, max=89904, avg=25310.86, stdev=11871.75 00:14:12.082 lat (usec): min=5510, max=97002, avg=25515.02, stdev=11972.38 00:14:12.082 clat percentiles (usec): 00:14:12.082 | 1.00th=[ 5538], 5.00th=[16909], 10.00th=[17433], 20.00th=[17957], 00:14:12.082 | 30.00th=[18744], 40.00th=[19530], 50.00th=[20055], 60.00th=[23725], 00:14:12.082 | 70.00th=[25822], 80.00th=[27919], 90.00th=[38536], 95.00th=[52691], 00:14:12.082 | 99.00th=[66323], 99.50th=[77071], 99.90th=[89654], 99.95th=[89654], 00:14:12.082 | 99.99th=[89654] 00:14:12.082 write: IOPS=2526, BW=9.87MiB/s (10.3MB/s)(9.98MiB/1011msec); 0 zone resets 00:14:12.082 slat (usec): min=3, max=25609, avg=205.75, stdev=1130.72 00:14:12.082 clat (msec): min=3, max=106, avg=29.53, stdev=18.36 00:14:12.082 lat (msec): min=3, max=106, avg=29.74, stdev=18.47 00:14:12.082 clat percentiles (msec): 00:14:12.082 | 1.00th=[ 7], 5.00th=[ 11], 10.00th=[ 13], 20.00th=[ 21], 00:14:12.082 | 30.00th=[ 22], 40.00th=[ 23], 50.00th=[ 23], 60.00th=[ 24], 00:14:12.082 | 70.00th=[ 28], 80.00th=[ 41], 90.00th=[ 52], 95.00th=[ 59], 00:14:12.082 | 99.00th=[ 102], 99.50th=[ 104], 99.90th=[ 107], 99.95th=[ 107], 00:14:12.082 | 99.99th=[ 107] 00:14:12.082 bw ( KiB/s): min= 8472, max=10936, per=14.52%, avg=9704.00, stdev=1742.31, samples=2 00:14:12.082 iops : min= 2118, max= 2734, avg=2426.00, stdev=435.58, samples=2 00:14:12.082 lat (msec) : 4=0.28%, 10=2.56%, 20=29.29%, 50=58.89%, 100=8.34% 00:14:12.082 lat (msec) : 250=0.63% 00:14:12.082 cpu : usr=2.57%, sys=5.54%, ctx=288, majf=0, minf=1 00:14:12.082 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:14:12.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:12.082 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:12.082 issued rwts: total=2048,2554,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:12.082 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:12.082 00:14:12.082 Run status group 0 (all jobs): 00:14:12.082 READ: bw=59.5MiB/s (62.4MB/s), 8103KiB/s-19.9MiB/s (8297kB/s-20.9MB/s), io=60.2MiB (63.1MB), run=1007-1011msec 00:14:12.082 WRITE: bw=65.3MiB/s (68.4MB/s), 9.87MiB/s-21.8MiB/s (10.3MB/s-22.8MB/s), io=66.0MiB (69.2MB), run=1007-1011msec 00:14:12.082 00:14:12.082 Disk stats (read/write): 00:14:12.082 nvme0n1: ios=3366/3584, merge=0/0, ticks=48004/55056, in_queue=103060, util=100.00% 00:14:12.082 nvme0n2: ios=4350/4608, merge=0/0, ticks=47883/45070, in_queue=92953, util=87.82% 00:14:12.082 nvme0n3: ios=3641/3823, merge=0/0, ticks=24765/28631, in_queue=53396, util=90.93% 00:14:12.082 nvme0n4: ios=1919/2048, merge=0/0, ticks=20084/27042, in_queue=47126, util=100.00% 00:14:12.082 23:40:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:14:12.082 23:40:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3770154 00:14:12.082 23:40:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:14:12.082 23:40:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:14:12.082 [global] 00:14:12.082 thread=1 00:14:12.082 invalidate=1 00:14:12.082 rw=read 00:14:12.082 time_based=1 00:14:12.082 runtime=10 00:14:12.082 ioengine=libaio 00:14:12.082 direct=1 00:14:12.082 
bs=4096 00:14:12.082 iodepth=1 00:14:12.082 norandommap=1 00:14:12.082 numjobs=1 00:14:12.082 00:14:12.082 [job0] 00:14:12.082 filename=/dev/nvme0n1 00:14:12.082 [job1] 00:14:12.082 filename=/dev/nvme0n2 00:14:12.082 [job2] 00:14:12.082 filename=/dev/nvme0n3 00:14:12.082 [job3] 00:14:12.082 filename=/dev/nvme0n4 00:14:12.082 Could not set queue depth (nvme0n1) 00:14:12.082 Could not set queue depth (nvme0n2) 00:14:12.082 Could not set queue depth (nvme0n3) 00:14:12.082 Could not set queue depth (nvme0n4) 00:14:12.082 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:12.082 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:12.082 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:12.082 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:12.082 fio-3.35 00:14:12.082 Starting 4 threads 00:14:15.360 23:40:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:14:15.360 23:40:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:14:15.360 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=18939904, buflen=4096 00:14:15.360 fio: pid=3770369, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:15.360 23:40:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:15.360 23:40:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:14:15.360 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=9011200, buflen=4096 00:14:15.360 fio: pid=3770360, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:15.618 23:40:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:15.618 23:40:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:14:15.618 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=532480, buflen=4096 00:14:15.618 fio: pid=3770308, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:15.876 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=368640, buflen=4096 00:14:15.876 fio: pid=3770322, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:14:15.876 23:40:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:15.876 23:40:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:14:15.876 00:14:15.876 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3770308: Mon Jul 15 23:40:50 2024 00:14:15.876 read: IOPS=38, BW=152KiB/s (156kB/s)(520KiB/3423msec) 00:14:15.876 slat (usec): min=5, max=27829, avg=300.27, stdev=2544.79 00:14:15.876 clat (usec): min=236, max=42254, avg=25845.32, stdev=19822.99 00:14:15.876 lat (usec): min=248, max=68976, avg=26147.63, stdev=20208.92 00:14:15.876 clat percentiles 
(usec): 00:14:15.876 | 1.00th=[ 245], 5.00th=[ 343], 10.00th=[ 351], 20.00th=[ 375], 00:14:15.876 | 30.00th=[ 392], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:14:15.876 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:14:15.876 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:15.876 | 99.99th=[42206] 00:14:15.876 bw ( KiB/s): min= 96, max= 344, per=2.08%, avg=160.00, stdev=95.73, samples=6 00:14:15.876 iops : min= 24, max= 86, avg=40.00, stdev=23.93, samples=6 00:14:15.876 lat (usec) : 250=1.53%, 500=35.11% 00:14:15.876 lat (msec) : 10=0.76%, 50=61.83% 00:14:15.876 cpu : usr=0.00%, sys=0.15%, ctx=133, majf=0, minf=1 00:14:15.876 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:15.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.876 complete : 0=0.8%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.876 issued rwts: total=131,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:15.877 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:15.877 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=3770322: Mon Jul 15 23:40:50 2024 00:14:15.877 read: IOPS=24, BW=98.1KiB/s (101kB/s)(360KiB/3668msec) 00:14:15.877 slat (usec): min=6, max=8877, avg=267.18, stdev=1365.78 00:14:15.877 clat (usec): min=245, max=42303, avg=40475.88, stdev=6107.55 00:14:15.877 lat (usec): min=259, max=49965, avg=40659.30, stdev=6225.47 00:14:15.877 clat percentiles (usec): 00:14:15.877 | 1.00th=[ 245], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:14:15.877 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:14:15.877 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:14:15.877 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:15.877 | 99.99th=[42206] 00:14:15.877 bw ( KiB/s): min= 96, max= 104, per=1.26%, avg=97.71, stdev= 3.15, samples=7 00:14:15.877 iops : min= 24, max= 26, avg=24.43, stdev= 0.79, samples=7 00:14:15.877 lat (usec) : 250=1.10%, 500=1.10% 00:14:15.877 lat (msec) : 50=96.70% 00:14:15.877 cpu : usr=0.00%, sys=0.27%, ctx=96, majf=0, minf=1 00:14:15.877 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:15.877 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.877 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.877 issued rwts: total=91,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:15.877 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:15.877 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3770360: Mon Jul 15 23:40:50 2024 00:14:15.877 read: IOPS=691, BW=2766KiB/s (2833kB/s)(8800KiB/3181msec) 00:14:15.877 slat (usec): min=5, max=6827, avg=13.21, stdev=145.43 00:14:15.877 clat (usec): min=201, max=42407, avg=1417.26, stdev=6846.43 00:14:15.877 lat (usec): min=207, max=48900, avg=1430.47, stdev=6868.47 00:14:15.877 clat percentiles (usec): 00:14:15.877 | 1.00th=[ 208], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 223], 00:14:15.877 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 239], 60.00th=[ 247], 00:14:15.877 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 285], 00:14:15.877 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:15.877 | 99.99th=[42206] 00:14:15.877 bw ( KiB/s): min= 104, max= 8680, per=38.12%, avg=2928.00, stdev=3926.93, samples=6 00:14:15.877 
iops : min= 26, max= 2170, avg=732.00, stdev=981.73, samples=6 00:14:15.877 lat (usec) : 250=61.84%, 500=35.21%, 750=0.05% 00:14:15.877 lat (msec) : 50=2.86% 00:14:15.877 cpu : usr=0.60%, sys=0.94%, ctx=2202, majf=0, minf=1 00:14:15.877 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:15.877 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.877 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.877 issued rwts: total=2201,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:15.877 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:15.877 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3770369: Mon Jul 15 23:40:50 2024 00:14:15.877 read: IOPS=1600, BW=6400KiB/s (6554kB/s)(18.1MiB/2890msec) 00:14:15.877 slat (nsec): min=4513, max=74652, avg=12169.47, stdev=6327.57 00:14:15.877 clat (usec): min=203, max=42214, avg=604.79, stdev=3727.10 00:14:15.877 lat (usec): min=208, max=42231, avg=616.96, stdev=3727.83 00:14:15.877 clat percentiles (usec): 00:14:15.877 | 1.00th=[ 212], 5.00th=[ 219], 10.00th=[ 221], 20.00th=[ 227], 00:14:15.877 | 30.00th=[ 233], 40.00th=[ 243], 50.00th=[ 258], 60.00th=[ 269], 00:14:15.877 | 70.00th=[ 281], 80.00th=[ 306], 90.00th=[ 318], 95.00th=[ 330], 00:14:15.877 | 99.00th=[ 578], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:14:15.877 | 99.99th=[42206] 00:14:15.877 bw ( KiB/s): min= 112, max=13864, per=96.13%, avg=7384.00, stdev=5774.52, samples=5 00:14:15.877 iops : min= 28, max= 3466, avg=1846.00, stdev=1443.63, samples=5 00:14:15.877 lat (usec) : 250=45.77%, 500=52.41%, 750=0.97% 00:14:15.877 lat (msec) : 50=0.82% 00:14:15.877 cpu : usr=1.21%, sys=2.80%, ctx=4626, majf=0, minf=1 00:14:15.877 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:15.877 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.877 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.877 issued rwts: total=4625,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:15.877 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:15.877 00:14:15.877 Run status group 0 (all jobs): 00:14:15.877 READ: bw=7682KiB/s (7866kB/s), 98.1KiB/s-6400KiB/s (101kB/s-6554kB/s), io=27.5MiB (28.9MB), run=2890-3668msec 00:14:15.877 00:14:15.877 Disk stats (read/write): 00:14:15.877 nvme0n1: ios=128/0, merge=0/0, ticks=3276/0, in_queue=3276, util=95.02% 00:14:15.877 nvme0n2: ios=88/0, merge=0/0, ticks=3563/0, in_queue=3563, util=96.11% 00:14:15.877 nvme0n3: ios=2198/0, merge=0/0, ticks=2990/0, in_queue=2990, util=96.54% 00:14:15.877 nvme0n4: ios=4623/0, merge=0/0, ticks=2722/0, in_queue=2722, util=96.71% 00:14:16.134 23:40:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:16.134 23:40:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:14:16.391 23:40:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:16.391 23:40:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:14:16.647 23:40:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:16.647 23:40:51 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:14:16.903 23:40:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:16.903 23:40:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:14:17.161 23:40:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:14:17.161 23:40:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 3770154 00:14:17.161 23:40:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:14:17.161 23:40:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:17.418 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.418 23:40:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:17.418 23:40:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:14:17.418 23:40:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:17.418 23:40:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:17.418 23:40:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:17.418 23:40:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:17.418 23:40:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:14:17.418 23:40:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:14:17.418 23:40:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:14:17.418 nvmf hotplug test: fio failed as expected 00:14:17.418 23:40:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:17.676 23:40:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:14:17.676 23:40:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:14:17.676 23:40:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:14:17.676 23:40:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:14:17.676 23:40:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:14:17.676 23:40:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:17.676 23:40:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:14:17.676 23:40:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:17.676 23:40:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:14:17.676 23:40:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:17.676 23:40:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:17.676 rmmod nvme_tcp 00:14:17.676 rmmod nvme_fabrics 00:14:17.676 rmmod nvme_keyring 00:14:17.676 23:40:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:17.676 23:40:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:14:17.676 23:40:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:14:17.676 23:40:52 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@489 -- # '[' -n 3768250 ']' 00:14:17.676 23:40:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3768250 00:14:17.676 23:40:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 3768250 ']' 00:14:17.676 23:40:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 3768250 00:14:17.676 23:40:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:14:17.676 23:40:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:17.676 23:40:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3768250 00:14:17.676 23:40:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:17.676 23:40:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:17.676 23:40:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3768250' 00:14:17.676 killing process with pid 3768250 00:14:17.676 23:40:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 3768250 00:14:17.676 23:40:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 3768250 00:14:17.936 23:40:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:17.936 23:40:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:17.936 23:40:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:17.936 23:40:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:17.936 23:40:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:17.936 23:40:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.936 23:40:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:17.936 23:40:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.837 23:40:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:19.837 00:14:19.837 real 0m23.314s 00:14:19.837 user 1m21.526s 00:14:19.837 sys 0m6.187s 00:14:19.837 23:40:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:19.837 23:40:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.837 ************************************ 00:14:19.837 END TEST nvmf_fio_target 00:14:19.837 ************************************ 00:14:20.096 23:40:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:20.096 23:40:54 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:20.096 23:40:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:20.096 23:40:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:20.096 23:40:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:20.096 ************************************ 00:14:20.096 START TEST nvmf_bdevio 00:14:20.096 ************************************ 00:14:20.096 23:40:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:20.096 * Looking for test storage... 
00:14:20.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:14:20.096 23:40:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:22.630 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:22.630 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:22.630 Found net devices under 0000:09:00.0: cvl_0_0 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:22.630 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:22.630 
Found net devices under 0000:09:00.1: cvl_0_1 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:22.631 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:22.631 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:14:22.631 00:14:22.631 --- 10.0.0.2 ping statistics --- 00:14:22.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:22.631 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:22.631 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:22.631 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:14:22.631 00:14:22.631 --- 10.0.0.1 ping statistics --- 00:14:22.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:22.631 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3772986 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3772986 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 3772986 ']' 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:22.631 [2024-07-15 23:40:57.407124] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:14:22.631 [2024-07-15 23:40:57.407203] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:22.631 EAL: No free 2048 kB hugepages reported on node 1 00:14:22.631 [2024-07-15 23:40:57.469866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:22.631 [2024-07-15 23:40:57.578816] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:22.631 [2024-07-15 23:40:57.578863] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:22.631 [2024-07-15 23:40:57.578887] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:22.631 [2024-07-15 23:40:57.578898] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:22.631 [2024-07-15 23:40:57.578908] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:22.631 [2024-07-15 23:40:57.578996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:22.631 [2024-07-15 23:40:57.579062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:14:22.631 [2024-07-15 23:40:57.579127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:22.631 [2024-07-15 23:40:57.579130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:22.631 [2024-07-15 23:40:57.717554] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:22.631 Malloc0 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.631 23:40:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:22.890 23:40:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.890 23:40:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:22.890 23:40:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.890 23:40:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:22.890 23:40:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.890 23:40:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:22.890 23:40:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.890 23:40:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:14:22.890 [2024-07-15 23:40:57.768044] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:22.890 23:40:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.890 23:40:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:22.890 23:40:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:22.890 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:14:22.890 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:14:22.890 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:22.890 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:22.890 { 00:14:22.890 "params": { 00:14:22.890 "name": "Nvme$subsystem", 00:14:22.890 "trtype": "$TEST_TRANSPORT", 00:14:22.890 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:22.890 "adrfam": "ipv4", 00:14:22.890 "trsvcid": "$NVMF_PORT", 00:14:22.890 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:22.890 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:22.890 "hdgst": ${hdgst:-false}, 00:14:22.890 "ddgst": ${ddgst:-false} 00:14:22.890 }, 00:14:22.890 "method": "bdev_nvme_attach_controller" 00:14:22.890 } 00:14:22.890 EOF 00:14:22.890 )") 00:14:22.890 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:14:22.890 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:14:22.890 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:14:22.890 23:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:22.890 "params": { 00:14:22.890 "name": "Nvme1", 00:14:22.890 "trtype": "tcp", 00:14:22.890 "traddr": "10.0.0.2", 00:14:22.890 "adrfam": "ipv4", 00:14:22.890 "trsvcid": "4420", 00:14:22.890 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:22.890 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:22.890 "hdgst": false, 00:14:22.890 "ddgst": false 00:14:22.890 }, 00:14:22.890 "method": "bdev_nvme_attach_controller" 00:14:22.890 }' 00:14:22.890 [2024-07-15 23:40:57.810542] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
00:14:22.891 [2024-07-15 23:40:57.810618] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3773016 ] 00:14:22.891 EAL: No free 2048 kB hugepages reported on node 1 00:14:22.891 [2024-07-15 23:40:57.870622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:22.891 [2024-07-15 23:40:57.986535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:22.891 [2024-07-15 23:40:57.986585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:22.891 [2024-07-15 23:40:57.986588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.148 I/O targets: 00:14:23.148 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:23.148 00:14:23.148 00:14:23.148 CUnit - A unit testing framework for C - Version 2.1-3 00:14:23.148 http://cunit.sourceforge.net/ 00:14:23.148 00:14:23.148 00:14:23.148 Suite: bdevio tests on: Nvme1n1 00:14:23.148 Test: blockdev write read block ...passed 00:14:23.406 Test: blockdev write zeroes read block ...passed 00:14:23.406 Test: blockdev write zeroes read no split ...passed 00:14:23.406 Test: blockdev write zeroes read split ...passed 00:14:23.406 Test: blockdev write zeroes read split partial ...passed 00:14:23.406 Test: blockdev reset ...[2024-07-15 23:40:58.326989] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:23.406 [2024-07-15 23:40:58.327103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f1580 (9): Bad file descriptor 00:14:23.406 [2024-07-15 23:40:58.420085] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:23.406 passed 00:14:23.406 Test: blockdev write read 8 blocks ...passed 00:14:23.406 Test: blockdev write read size > 128k ...passed 00:14:23.406 Test: blockdev write read invalid size ...passed 00:14:23.406 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:23.406 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:23.406 Test: blockdev write read max offset ...passed 00:14:23.663 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:23.663 Test: blockdev writev readv 8 blocks ...passed 00:14:23.663 Test: blockdev writev readv 30 x 1block ...passed 00:14:23.663 Test: blockdev writev readv block ...passed 00:14:23.663 Test: blockdev writev readv size > 128k ...passed 00:14:23.663 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:23.663 Test: blockdev comparev and writev ...[2024-07-15 23:40:58.635088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:23.663 [2024-07-15 23:40:58.635124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:23.663 [2024-07-15 23:40:58.635148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:23.663 [2024-07-15 23:40:58.635166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:23.663 [2024-07-15 23:40:58.635476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:23.663 [2024-07-15 23:40:58.635502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:23.663 [2024-07-15 23:40:58.635524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:23.663 [2024-07-15 23:40:58.635540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:23.663 [2024-07-15 23:40:58.635856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:23.663 [2024-07-15 23:40:58.635880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:23.663 [2024-07-15 23:40:58.635901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:23.663 [2024-07-15 23:40:58.635926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:23.663 [2024-07-15 23:40:58.636240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:23.663 [2024-07-15 23:40:58.636264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:23.663 [2024-07-15 23:40:58.636285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:23.663 [2024-07-15 23:40:58.636302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:23.663 passed 00:14:23.663 Test: blockdev nvme passthru rw ...passed 00:14:23.663 Test: blockdev nvme passthru vendor specific ...[2024-07-15 23:40:58.719199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:23.663 [2024-07-15 23:40:58.719227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:23.663 [2024-07-15 23:40:58.719375] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:23.663 [2024-07-15 23:40:58.719399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:23.663 [2024-07-15 23:40:58.719536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:23.663 [2024-07-15 23:40:58.719560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:23.663 [2024-07-15 23:40:58.719710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:23.663 [2024-07-15 23:40:58.719734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:23.663 passed 00:14:23.663 Test: blockdev nvme admin passthru ...passed 00:14:23.663 Test: blockdev copy ...passed 00:14:23.663 00:14:23.663 Run Summary: Type Total Ran Passed Failed Inactive 00:14:23.663 suites 1 1 n/a 0 0 00:14:23.663 tests 23 23 23 0 0 00:14:23.663 asserts 152 152 152 0 n/a 00:14:23.663 00:14:23.663 Elapsed time = 1.137 seconds 00:14:23.920 23:40:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:23.920 23:40:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.920 23:40:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:23.920 23:40:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.920 23:40:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:23.920 23:40:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:14:23.920 23:40:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:23.920 23:40:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:14:23.920 23:40:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:23.920 23:40:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:14:23.920 23:40:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:23.920 23:40:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:23.920 rmmod nvme_tcp 00:14:23.920 rmmod nvme_fabrics 00:14:23.920 rmmod nvme_keyring 00:14:24.178 23:40:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:24.178 23:40:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:14:24.178 23:40:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:14:24.178 23:40:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3772986 ']' 00:14:24.179 23:40:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3772986 00:14:24.179 23:40:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
3772986 ']' 00:14:24.179 23:40:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 3772986 00:14:24.179 23:40:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:14:24.179 23:40:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:24.179 23:40:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3772986 00:14:24.179 23:40:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:14:24.179 23:40:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:14:24.179 23:40:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3772986' 00:14:24.179 killing process with pid 3772986 00:14:24.179 23:40:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 3772986 00:14:24.179 23:40:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 3772986 00:14:24.439 23:40:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:24.439 23:40:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:24.439 23:40:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:24.439 23:40:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:24.439 23:40:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:24.439 23:40:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.439 23:40:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:24.439 23:40:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.344 23:41:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:26.344 00:14:26.344 real 0m6.408s 00:14:26.344 user 0m9.925s 00:14:26.344 sys 0m2.125s 00:14:26.344 23:41:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:26.344 23:41:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:26.344 ************************************ 00:14:26.344 END TEST nvmf_bdevio 00:14:26.344 ************************************ 00:14:26.344 23:41:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:26.344 23:41:01 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:26.344 23:41:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:26.344 23:41:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:26.344 23:41:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:26.344 ************************************ 00:14:26.344 START TEST nvmf_auth_target 00:14:26.344 ************************************ 00:14:26.344 23:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:26.601 * Looking for test storage... 
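
Before the auth test output continues, note the teardown pattern nvmftestfini just executed above; it reduces to a handful of commands (the pid and interface name are the ones from this run) and repeats after every test in this job:

kill 3772986               # the nvmf_tgt pid recorded at startup
wait 3772986               # reap it (works because the harness launched it)
modprobe -v -r nvme-tcp    # the rmmod lines above show nvme_fabrics/nvme_keyring going with it
modprobe -v -r nvme-fabrics
ip -4 addr flush cvl_0_1   # drop the test address from the initiator port
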
00:14:26.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:14:26.602 23:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:29.135 23:41:03 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:29.135 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:29.135 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: 
cvl_0_0' 00:14:29.135 Found net devices under 0000:09:00.0: cvl_0_0 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:29.135 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:29.136 Found net devices under 0000:09:00.1: cvl_0_1 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:29.136 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:29.136 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:14:29.136 00:14:29.136 --- 10.0.0.2 ping statistics --- 00:14:29.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.136 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:29.136 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:29.136 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:14:29.136 00:14:29.136 --- 10.0.0.1 ping statistics --- 00:14:29.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.136 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3775086 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3775086 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3775086 ']' 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
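
The ip/ping transcript above is nvmf_tcp_init building the usual phy-mode topology: one port of the e810 pair moves into a private network namespace and becomes the target side (10.0.0.2), while the other stays in the root namespace as the initiator (10.0.0.1). As standalone commands, with the device names discovered above:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port now lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in on the initiator port
ping -c 1 10.0.0.2                                     # root ns -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # namespace -> root ns

This is also why nvmf_tgt is launched under ip netns exec cvl_0_0_ns_spdk just below: only inside the namespace can it bind the 10.0.0.2 listener.
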
00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:29.136 23:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=3775137 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=72a83f117026585806b3067c60dfb111a4bec13f2274b0bf 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.4jb 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 72a83f117026585806b3067c60dfb111a4bec13f2274b0bf 0 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 72a83f117026585806b3067c60dfb111a4bec13f2274b0bf 0 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=72a83f117026585806b3067c60dfb111a4bec13f2274b0bf 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.4jb 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.4jb 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.4jb 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=16d76ac801c90ad3a34c7b8ce112ea64c35534284065fc17810c52833eb2333f 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.tat 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 16d76ac801c90ad3a34c7b8ce112ea64c35534284065fc17810c52833eb2333f 3 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 16d76ac801c90ad3a34c7b8ce112ea64c35534284065fc17810c52833eb2333f 3 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=16d76ac801c90ad3a34c7b8ce112ea64c35534284065fc17810c52833eb2333f 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:14:29.136 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.tat 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.tat 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.tat 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=27e79b2695ee22e4a2f757f78694adeb 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.n0s 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 27e79b2695ee22e4a2f757f78694adeb 1 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 27e79b2695ee22e4a2f757f78694adeb 1 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=27e79b2695ee22e4a2f757f78694adeb 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.n0s 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.n0s 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.n0s 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a18e09caf085877c80a8127ecd4f21fecdf71577563f6224 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.BnB 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a18e09caf085877c80a8127ecd4f21fecdf71577563f6224 2 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a18e09caf085877c80a8127ecd4f21fecdf71577563f6224 2 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a18e09caf085877c80a8127ecd4f21fecdf71577563f6224 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.BnB 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.BnB 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.BnB 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0ed9d75988979c4cadb0cff45790c4bed5b88b906aaa1888 00:14:29.445 
23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.YR5 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0ed9d75988979c4cadb0cff45790c4bed5b88b906aaa1888 2 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0ed9d75988979c4cadb0cff45790c4bed5b88b906aaa1888 2 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0ed9d75988979c4cadb0cff45790c4bed5b88b906aaa1888 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.YR5 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.YR5 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.YR5 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:29.445 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:14:29.446 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:14:29.446 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:29.446 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b16428af8b934a20d72b429501f8651c 00:14:29.446 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:14:29.446 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.OXN 00:14:29.446 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b16428af8b934a20d72b429501f8651c 1 00:14:29.446 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b16428af8b934a20d72b429501f8651c 1 00:14:29.446 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:29.446 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:29.446 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b16428af8b934a20d72b429501f8651c 00:14:29.446 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:14:29.446 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:29.446 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.OXN 00:14:29.446 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.OXN 00:14:29.446 23:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.OXN 00:14:29.446 23:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:14:29.446 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:14:29.446 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:29.446 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:29.446 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:14:29.446 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:14:29.446 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:29.446 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3336b3df2a31cbb4d969686bdfc48b4ba4aeb286dd41eb5b08aefc143c729c13 00:14:29.446 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:14:29.446 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.yn0 00:14:29.446 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3336b3df2a31cbb4d969686bdfc48b4ba4aeb286dd41eb5b08aefc143c729c13 3 00:14:29.446 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3336b3df2a31cbb4d969686bdfc48b4ba4aeb286dd41eb5b08aefc143c729c13 3 00:14:29.446 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:29.446 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:29.446 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3336b3df2a31cbb4d969686bdfc48b4ba4aeb286dd41eb5b08aefc143c729c13 00:14:29.446 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:14:29.446 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:29.446 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.yn0 00:14:29.446 23:41:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.yn0 00:14:29.446 23:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.yn0 00:14:29.446 23:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:14:29.446 23:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 3775086 00:14:29.446 23:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3775086 ']' 00:14:29.446 23:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.446 23:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:29.446 23:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
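
The gen_dhchap_key traces above all follow one recipe: pull N random bytes as hex with xxd, then base64-wrap the ASCII hex string plus its little-endian CRC32 into a DHHC-1 secret (decoding the DHHC-1:00 secret that appears later in this log matches this layout). A self-contained sketch of the same steps; digest code 00 is null, and the sha256/sha384/sha512 keys above apparently map to 01/02/03:

key=$(xxd -p -c0 -l 24 /dev/urandom)     # 48 hex chars, as for keys[0]
python3 - "$key" <<'PY'
import base64, sys, zlib
k = sys.argv[1].strip().encode()                   # the ASCII hex string is the key material
crc = zlib.crc32(k).to_bytes(4, "little")          # 4-byte CRC trailer
print("DHHC-1:00:" + base64.b64encode(k + crc).decode() + ":")
PY

Once the secrets are written to files and chmod 0600, the setup that follows is symmetric registration plus wiring, condensed here from the RPCs below ($rpc shorthand ours):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc keyring_file_add_key key0 /tmp/spdk.key-null.4jb                          # target side
$rpc -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.4jb    # host side
$rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
  nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
  --dhchap-key key0 --dhchap-ctrlr-key ckey0
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
  -a 10.0.0.2 -s 4420 \
  -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
  -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
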
00:14:29.446 23:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:29.446 23:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.704 23:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:29.704 23:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:29.704 23:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 3775137 /var/tmp/host.sock 00:14:29.704 23:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3775137 ']' 00:14:29.704 23:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:14:29.704 23:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:29.704 23:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:29.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:29.704 23:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:29.704 23:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.961 23:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:29.961 23:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:29.961 23:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:14:29.961 23:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.961 23:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.961 23:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.961 23:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:29.961 23:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.4jb 00:14:29.961 23:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.961 23:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.961 23:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.961 23:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.4jb 00:14:29.961 23:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.4jb 00:14:30.219 23:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.tat ]] 00:14:30.219 23:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.tat 00:14:30.219 23:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.219 23:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.219 23:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.219 23:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.tat 00:14:30.219 23:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.tat 00:14:30.477 23:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:30.477 23:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.n0s 00:14:30.477 23:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.477 23:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.477 23:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.477 23:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.n0s 00:14:30.477 23:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.n0s 00:14:30.735 23:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.BnB ]] 00:14:30.735 23:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.BnB 00:14:30.735 23:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.735 23:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.735 23:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.735 23:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.BnB 00:14:30.735 23:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.BnB 00:14:30.992 23:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:30.992 23:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.YR5 00:14:30.992 23:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.992 23:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.992 23:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.992 23:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.YR5 00:14:30.992 23:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.YR5 00:14:31.250 23:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.OXN ]] 00:14:31.250 23:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.OXN 00:14:31.250 23:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.250 23:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.250 23:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.250 23:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.OXN 00:14:31.250 23:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.OXN 00:14:31.508 23:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:31.508 23:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.yn0 00:14:31.508 23:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.508 23:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.508 23:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.508 23:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.yn0 00:14:31.508 23:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.yn0 00:14:31.766 23:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:14:31.766 23:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:31.766 23:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:31.766 23:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:31.766 23:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:31.766 23:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:32.024 23:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:14:32.024 23:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:32.024 23:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:32.024 23:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:32.024 23:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:32.024 23:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.024 23:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:32.024 23:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.024 23:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.024 23:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.024 23:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:32.024 23:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:32.282 00:14:32.282 23:41:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:32.282 23:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:32.282 23:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.539 23:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.539 23:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.539 23:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.539 23:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.539 23:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.539 23:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:32.539 { 00:14:32.539 "cntlid": 1, 00:14:32.539 "qid": 0, 00:14:32.539 "state": "enabled", 00:14:32.539 "thread": "nvmf_tgt_poll_group_000", 00:14:32.539 "listen_address": { 00:14:32.539 "trtype": "TCP", 00:14:32.539 "adrfam": "IPv4", 00:14:32.539 "traddr": "10.0.0.2", 00:14:32.539 "trsvcid": "4420" 00:14:32.539 }, 00:14:32.539 "peer_address": { 00:14:32.539 "trtype": "TCP", 00:14:32.539 "adrfam": "IPv4", 00:14:32.539 "traddr": "10.0.0.1", 00:14:32.539 "trsvcid": "45066" 00:14:32.539 }, 00:14:32.539 "auth": { 00:14:32.539 "state": "completed", 00:14:32.539 "digest": "sha256", 00:14:32.539 "dhgroup": "null" 00:14:32.539 } 00:14:32.539 } 00:14:32.539 ]' 00:14:32.539 23:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:32.797 23:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:32.797 23:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:32.797 23:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:32.797 23:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:32.797 23:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:32.797 23:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:32.797 23:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.055 23:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:NzJhODNmMTE3MDI2NTg1ODA2YjMwNjdjNjBkZmIxMTFhNGJlYzEzZjIyNzRiMGJmoI/A2g==: --dhchap-ctrl-secret DHHC-1:03:MTZkNzZhYzgwMWM5MGFkM2EzNGM3YjhjZTExMmVhNjRjMzU1MzQyODQwNjVmYzE3ODEwYzUyODMzZWIyMzMzZigqzgg=: 00:14:33.988 23:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.988 23:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:33.988 23:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.988 23:41:08 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.988 23:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.988 23:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:33.988 23:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:33.988 23:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:33.988 23:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:14:33.988 23:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:33.988 23:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:33.988 23:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:33.988 23:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:33.988 23:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.988 23:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:33.988 23:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.988 23:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.988 23:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.988 23:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:33.988 23:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.554 00:14:34.554 23:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:34.554 23:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.554 23:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:34.554 23:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.554 23:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.554 23:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.554 23:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.554 23:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.554 23:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:34.554 { 00:14:34.554 "cntlid": 3, 00:14:34.554 "qid": 0, 00:14:34.554 
"state": "enabled", 00:14:34.554 "thread": "nvmf_tgt_poll_group_000", 00:14:34.554 "listen_address": { 00:14:34.554 "trtype": "TCP", 00:14:34.554 "adrfam": "IPv4", 00:14:34.554 "traddr": "10.0.0.2", 00:14:34.554 "trsvcid": "4420" 00:14:34.554 }, 00:14:34.554 "peer_address": { 00:14:34.554 "trtype": "TCP", 00:14:34.554 "adrfam": "IPv4", 00:14:34.554 "traddr": "10.0.0.1", 00:14:34.554 "trsvcid": "45102" 00:14:34.554 }, 00:14:34.554 "auth": { 00:14:34.554 "state": "completed", 00:14:34.554 "digest": "sha256", 00:14:34.554 "dhgroup": "null" 00:14:34.554 } 00:14:34.554 } 00:14:34.554 ]' 00:14:34.554 23:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:34.811 23:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:34.811 23:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:34.811 23:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:34.811 23:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:34.811 23:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.811 23:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.811 23:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.069 23:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:MjdlNzliMjY5NWVlMjJlNGEyZjc1N2Y3ODY5NGFkZWL42jLu: --dhchap-ctrl-secret DHHC-1:02:YTE4ZTA5Y2FmMDg1ODc3YzgwYTgxMjdlY2Q0ZjIxZmVjZGY3MTU3NzU2M2Y2MjI00G3iEg==: 00:14:36.002 23:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.002 23:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:36.002 23:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.002 23:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.002 23:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.002 23:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:36.002 23:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:36.002 23:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:36.260 23:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:14:36.260 23:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:36.260 23:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:36.260 23:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:36.260 23:41:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:36.260 23:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.260 23:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:36.260 23:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.260 23:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.260 23:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.260 23:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:36.260 23:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:36.518 00:14:36.518 23:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:36.518 23:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:36.518 23:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.776 23:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.776 23:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:36.776 23:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.776 23:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.776 23:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.776 23:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:36.776 { 00:14:36.776 "cntlid": 5, 00:14:36.776 "qid": 0, 00:14:36.776 "state": "enabled", 00:14:36.776 "thread": "nvmf_tgt_poll_group_000", 00:14:36.776 "listen_address": { 00:14:36.776 "trtype": "TCP", 00:14:36.776 "adrfam": "IPv4", 00:14:36.776 "traddr": "10.0.0.2", 00:14:36.776 "trsvcid": "4420" 00:14:36.776 }, 00:14:36.776 "peer_address": { 00:14:36.776 "trtype": "TCP", 00:14:36.776 "adrfam": "IPv4", 00:14:36.776 "traddr": "10.0.0.1", 00:14:36.776 "trsvcid": "45138" 00:14:36.776 }, 00:14:36.776 "auth": { 00:14:36.776 "state": "completed", 00:14:36.776 "digest": "sha256", 00:14:36.776 "dhgroup": "null" 00:14:36.776 } 00:14:36.776 } 00:14:36.776 ]' 00:14:36.777 23:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:36.777 23:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:36.777 23:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:36.777 23:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:36.777 23:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:14:36.777 23:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:36.777 23:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:36.777 23:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.342 23:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MGVkOWQ3NTk4ODk3OWM0Y2FkYjBjZmY0NTc5MGM0YmVkNWI4OGI5MDZhYWExODg4jJz3QQ==: --dhchap-ctrl-secret DHHC-1:01:YjE2NDI4YWY4YjkzNGEyMGQ3MmI0Mjk1MDFmODY1MWOvraqq: 00:14:37.927 23:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:37.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:37.927 23:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:37.927 23:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.927 23:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.184 23:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.184 23:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:38.184 23:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:38.184 23:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:38.442 23:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:14:38.442 23:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:38.442 23:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:38.442 23:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:38.442 23:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:38.442 23:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.442 23:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:38.442 23:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.442 23:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.442 23:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.442 23:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:38.442 23:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:38.699 00:14:38.699 23:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:38.699 23:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:38.699 23:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:38.957 23:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:38.957 23:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:38.957 23:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.957 23:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.957 23:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.957 23:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:38.957 { 00:14:38.957 "cntlid": 7, 00:14:38.957 "qid": 0, 00:14:38.957 "state": "enabled", 00:14:38.957 "thread": "nvmf_tgt_poll_group_000", 00:14:38.957 "listen_address": { 00:14:38.957 "trtype": "TCP", 00:14:38.957 "adrfam": "IPv4", 00:14:38.957 "traddr": "10.0.0.2", 00:14:38.957 "trsvcid": "4420" 00:14:38.957 }, 00:14:38.957 "peer_address": { 00:14:38.957 "trtype": "TCP", 00:14:38.957 "adrfam": "IPv4", 00:14:38.957 "traddr": "10.0.0.1", 00:14:38.957 "trsvcid": "45168" 00:14:38.957 }, 00:14:38.957 "auth": { 00:14:38.957 "state": "completed", 00:14:38.957 "digest": "sha256", 00:14:38.957 "dhgroup": "null" 00:14:38.957 } 00:14:38.957 } 00:14:38.957 ]' 00:14:38.957 23:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:38.957 23:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:38.957 23:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:38.957 23:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:38.957 23:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:38.957 23:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:38.957 23:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:38.957 23:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.523 23:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MzMzNmIzZGYyYTMxY2JiNGQ5Njk2ODZiZGZjNDhiNGJhNGFlYjI4NmRkNDFlYjViMDhhZWZjMTQzYzcyOWMxMxeY0e4=: 00:14:40.456 23:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:40.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:40.456 23:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:40.456 23:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.456 23:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.456 23:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.456 23:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:40.456 23:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:40.456 23:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:40.456 23:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:40.714 23:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:14:40.714 23:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:40.714 23:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:40.714 23:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:40.714 23:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:40.714 23:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:40.714 23:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:40.714 23:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.714 23:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.714 23:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.714 23:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:40.714 23:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:40.971 00:14:40.971 23:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:40.971 23:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:40.971 23:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:41.229 23:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:41.229 23:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:41.229 23:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:14:41.229 23:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.229 23:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.229 23:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:41.229 { 00:14:41.229 "cntlid": 9, 00:14:41.229 "qid": 0, 00:14:41.229 "state": "enabled", 00:14:41.229 "thread": "nvmf_tgt_poll_group_000", 00:14:41.229 "listen_address": { 00:14:41.229 "trtype": "TCP", 00:14:41.229 "adrfam": "IPv4", 00:14:41.229 "traddr": "10.0.0.2", 00:14:41.229 "trsvcid": "4420" 00:14:41.229 }, 00:14:41.229 "peer_address": { 00:14:41.229 "trtype": "TCP", 00:14:41.229 "adrfam": "IPv4", 00:14:41.229 "traddr": "10.0.0.1", 00:14:41.229 "trsvcid": "38760" 00:14:41.229 }, 00:14:41.229 "auth": { 00:14:41.229 "state": "completed", 00:14:41.229 "digest": "sha256", 00:14:41.229 "dhgroup": "ffdhe2048" 00:14:41.229 } 00:14:41.229 } 00:14:41.229 ]' 00:14:41.229 23:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:41.229 23:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:41.229 23:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:41.229 23:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:41.229 23:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:41.229 23:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:41.229 23:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:41.229 23:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.488 23:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:NzJhODNmMTE3MDI2NTg1ODA2YjMwNjdjNjBkZmIxMTFhNGJlYzEzZjIyNzRiMGJmoI/A2g==: --dhchap-ctrl-secret DHHC-1:03:MTZkNzZhYzgwMWM5MGFkM2EzNGM3YjhjZTExMmVhNjRjMzU1MzQyODQwNjVmYzE3ODEwYzUyODMzZWIyMzMzZigqzgg=: 00:14:42.420 23:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:42.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:42.420 23:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:42.420 23:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.420 23:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.420 23:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.420 23:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:42.420 23:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:42.420 23:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:14:42.677 23:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:14:42.677 23:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:42.677 23:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:42.677 23:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:42.677 23:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:42.677 23:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:42.677 23:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:42.678 23:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.678 23:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.678 23:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.678 23:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:42.678 23:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:43.243 00:14:43.243 23:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:43.243 23:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:43.243 23:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:43.499 23:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:43.499 23:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:43.499 23:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.499 23:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.499 23:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.499 23:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:43.499 { 00:14:43.499 "cntlid": 11, 00:14:43.499 "qid": 0, 00:14:43.499 "state": "enabled", 00:14:43.499 "thread": "nvmf_tgt_poll_group_000", 00:14:43.499 "listen_address": { 00:14:43.499 "trtype": "TCP", 00:14:43.499 "adrfam": "IPv4", 00:14:43.499 "traddr": "10.0.0.2", 00:14:43.499 "trsvcid": "4420" 00:14:43.499 }, 00:14:43.499 "peer_address": { 00:14:43.499 "trtype": "TCP", 00:14:43.499 "adrfam": "IPv4", 00:14:43.499 "traddr": "10.0.0.1", 00:14:43.499 "trsvcid": "38784" 00:14:43.499 }, 00:14:43.499 "auth": { 00:14:43.499 "state": "completed", 00:14:43.499 "digest": "sha256", 00:14:43.499 "dhgroup": "ffdhe2048" 00:14:43.499 } 00:14:43.499 } 00:14:43.499 ]' 00:14:43.500 
23:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:43.500 23:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:43.500 23:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:43.500 23:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:43.500 23:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:43.500 23:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:43.500 23:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:43.500 23:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:43.757 23:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:MjdlNzliMjY5NWVlMjJlNGEyZjc1N2Y3ODY5NGFkZWL42jLu: --dhchap-ctrl-secret DHHC-1:02:YTE4ZTA5Y2FmMDg1ODc3YzgwYTgxMjdlY2Q0ZjIxZmVjZGY3MTU3NzU2M2Y2MjI00G3iEg==: 00:14:44.688 23:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:44.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:44.689 23:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:44.689 23:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.689 23:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.689 23:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.689 23:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:44.689 23:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:44.689 23:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:44.947 23:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:14:44.947 23:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:44.947 23:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:44.947 23:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:44.947 23:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:44.947 23:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:44.947 23:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:44.947 23:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.947 23:41:20 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:44.947 23:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.947 23:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:44.947 23:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:45.549 00:14:45.549 23:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:45.549 23:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.549 23:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:45.549 23:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.549 23:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:45.549 23:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.549 23:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.806 23:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.807 23:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:45.807 { 00:14:45.807 "cntlid": 13, 00:14:45.807 "qid": 0, 00:14:45.807 "state": "enabled", 00:14:45.807 "thread": "nvmf_tgt_poll_group_000", 00:14:45.807 "listen_address": { 00:14:45.807 "trtype": "TCP", 00:14:45.807 "adrfam": "IPv4", 00:14:45.807 "traddr": "10.0.0.2", 00:14:45.807 "trsvcid": "4420" 00:14:45.807 }, 00:14:45.807 "peer_address": { 00:14:45.807 "trtype": "TCP", 00:14:45.807 "adrfam": "IPv4", 00:14:45.807 "traddr": "10.0.0.1", 00:14:45.807 "trsvcid": "38814" 00:14:45.807 }, 00:14:45.807 "auth": { 00:14:45.807 "state": "completed", 00:14:45.807 "digest": "sha256", 00:14:45.807 "dhgroup": "ffdhe2048" 00:14:45.807 } 00:14:45.807 } 00:14:45.807 ]' 00:14:45.807 23:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:45.807 23:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:45.807 23:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:45.807 23:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:45.807 23:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:45.807 23:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.807 23:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:45.807 23:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.064 23:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MGVkOWQ3NTk4ODk3OWM0Y2FkYjBjZmY0NTc5MGM0YmVkNWI4OGI5MDZhYWExODg4jJz3QQ==: --dhchap-ctrl-secret DHHC-1:01:YjE2NDI4YWY4YjkzNGEyMGQ3MmI0Mjk1MDFmODY1MWOvraqq: 00:14:46.996 23:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:46.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:46.996 23:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:46.996 23:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.996 23:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.996 23:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.996 23:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:46.996 23:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:46.996 23:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:47.254 23:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:14:47.254 23:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:47.254 23:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:47.254 23:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:47.254 23:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:47.254 23:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.254 23:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:47.254 23:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.254 23:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.254 23:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.254 23:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:47.254 23:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:47.818 00:14:47.818 23:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:47.818 23:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:47.818 23:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.818 23:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.818 23:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:47.818 23:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.818 23:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.818 23:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.818 23:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:47.818 { 00:14:47.818 "cntlid": 15, 00:14:47.818 "qid": 0, 00:14:47.818 "state": "enabled", 00:14:47.818 "thread": "nvmf_tgt_poll_group_000", 00:14:47.818 "listen_address": { 00:14:47.818 "trtype": "TCP", 00:14:47.818 "adrfam": "IPv4", 00:14:47.818 "traddr": "10.0.0.2", 00:14:47.818 "trsvcid": "4420" 00:14:47.818 }, 00:14:47.818 "peer_address": { 00:14:47.818 "trtype": "TCP", 00:14:47.818 "adrfam": "IPv4", 00:14:47.818 "traddr": "10.0.0.1", 00:14:47.818 "trsvcid": "38842" 00:14:47.818 }, 00:14:47.818 "auth": { 00:14:47.818 "state": "completed", 00:14:47.818 "digest": "sha256", 00:14:47.818 "dhgroup": "ffdhe2048" 00:14:47.818 } 00:14:47.818 } 00:14:47.818 ]' 00:14:47.818 23:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:48.075 23:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:48.075 23:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:48.075 23:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:48.075 23:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:48.075 23:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.075 23:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.075 23:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.332 23:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MzMzNmIzZGYyYTMxY2JiNGQ5Njk2ODZiZGZjNDhiNGJhNGFlYjI4NmRkNDFlYjViMDhhZWZjMTQzYzcyOWMxMxeY0e4=: 00:14:49.259 23:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.259 23:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:49.259 23:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.259 23:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.259 23:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.259 23:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:49.259 23:41:24 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:49.259 23:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:49.259 23:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:49.515 23:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:14:49.515 23:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:49.515 23:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:49.515 23:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:49.515 23:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:49.515 23:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.515 23:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:49.515 23:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.515 23:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.515 23:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.515 23:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:49.515 23:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:49.772 00:14:49.772 23:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:49.772 23:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:49.772 23:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.028 23:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.028 23:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.028 23:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.028 23:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.028 23:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.029 23:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:50.029 { 00:14:50.029 "cntlid": 17, 00:14:50.029 "qid": 0, 00:14:50.029 "state": "enabled", 00:14:50.029 "thread": "nvmf_tgt_poll_group_000", 00:14:50.029 "listen_address": { 00:14:50.029 "trtype": "TCP", 00:14:50.029 "adrfam": "IPv4", 00:14:50.029 "traddr": 
"10.0.0.2", 00:14:50.029 "trsvcid": "4420" 00:14:50.029 }, 00:14:50.029 "peer_address": { 00:14:50.029 "trtype": "TCP", 00:14:50.029 "adrfam": "IPv4", 00:14:50.029 "traddr": "10.0.0.1", 00:14:50.029 "trsvcid": "38860" 00:14:50.029 }, 00:14:50.029 "auth": { 00:14:50.029 "state": "completed", 00:14:50.029 "digest": "sha256", 00:14:50.029 "dhgroup": "ffdhe3072" 00:14:50.029 } 00:14:50.029 } 00:14:50.029 ]' 00:14:50.029 23:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:50.029 23:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:50.029 23:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:50.029 23:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:50.029 23:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:50.285 23:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.285 23:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.285 23:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.285 23:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:NzJhODNmMTE3MDI2NTg1ODA2YjMwNjdjNjBkZmIxMTFhNGJlYzEzZjIyNzRiMGJmoI/A2g==: --dhchap-ctrl-secret DHHC-1:03:MTZkNzZhYzgwMWM5MGFkM2EzNGM3YjhjZTExMmVhNjRjMzU1MzQyODQwNjVmYzE3ODEwYzUyODMzZWIyMzMzZigqzgg=: 00:14:51.214 23:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.214 23:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:51.214 23:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.214 23:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.214 23:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.214 23:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:51.214 23:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:51.214 23:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:51.471 23:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:14:51.471 23:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:51.471 23:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:51.471 23:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:51.471 23:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:51.471 23:41:26 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:51.471 23:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:51.471 23:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.471 23:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.471 23:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.471 23:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:51.471 23:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:52.036 00:14:52.036 23:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:52.036 23:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:52.036 23:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.036 23:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.036 23:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.036 23:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.036 23:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.293 23:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.293 23:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:52.293 { 00:14:52.293 "cntlid": 19, 00:14:52.293 "qid": 0, 00:14:52.293 "state": "enabled", 00:14:52.293 "thread": "nvmf_tgt_poll_group_000", 00:14:52.293 "listen_address": { 00:14:52.293 "trtype": "TCP", 00:14:52.293 "adrfam": "IPv4", 00:14:52.293 "traddr": "10.0.0.2", 00:14:52.293 "trsvcid": "4420" 00:14:52.293 }, 00:14:52.293 "peer_address": { 00:14:52.293 "trtype": "TCP", 00:14:52.293 "adrfam": "IPv4", 00:14:52.293 "traddr": "10.0.0.1", 00:14:52.293 "trsvcid": "38816" 00:14:52.293 }, 00:14:52.293 "auth": { 00:14:52.293 "state": "completed", 00:14:52.293 "digest": "sha256", 00:14:52.293 "dhgroup": "ffdhe3072" 00:14:52.293 } 00:14:52.293 } 00:14:52.293 ]' 00:14:52.293 23:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:52.293 23:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:52.293 23:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:52.293 23:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:52.293 23:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:52.293 23:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.294 23:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.294 23:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:52.551 23:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:MjdlNzliMjY5NWVlMjJlNGEyZjc1N2Y3ODY5NGFkZWL42jLu: --dhchap-ctrl-secret DHHC-1:02:YTE4ZTA5Y2FmMDg1ODc3YzgwYTgxMjdlY2Q0ZjIxZmVjZGY3MTU3NzU2M2Y2MjI00G3iEg==: 00:14:53.483 23:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.483 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.483 23:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:53.483 23:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.483 23:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.483 23:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.483 23:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:53.483 23:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:53.483 23:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:53.741 23:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:14:53.741 23:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:53.741 23:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:53.741 23:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:53.741 23:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:53.741 23:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:53.741 23:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:53.741 23:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.741 23:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.741 23:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.741 23:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:53.741 23:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:53.999 00:14:53.999 23:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:53.999 23:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.999 23:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:54.257 23:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.257 23:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.257 23:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.257 23:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.257 23:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.257 23:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:54.257 { 00:14:54.257 "cntlid": 21, 00:14:54.257 "qid": 0, 00:14:54.257 "state": "enabled", 00:14:54.257 "thread": "nvmf_tgt_poll_group_000", 00:14:54.257 "listen_address": { 00:14:54.257 "trtype": "TCP", 00:14:54.257 "adrfam": "IPv4", 00:14:54.257 "traddr": "10.0.0.2", 00:14:54.257 "trsvcid": "4420" 00:14:54.257 }, 00:14:54.257 "peer_address": { 00:14:54.257 "trtype": "TCP", 00:14:54.257 "adrfam": "IPv4", 00:14:54.257 "traddr": "10.0.0.1", 00:14:54.257 "trsvcid": "38836" 00:14:54.257 }, 00:14:54.257 "auth": { 00:14:54.257 "state": "completed", 00:14:54.257 "digest": "sha256", 00:14:54.257 "dhgroup": "ffdhe3072" 00:14:54.257 } 00:14:54.257 } 00:14:54.257 ]' 00:14:54.257 23:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:54.257 23:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:54.257 23:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:54.257 23:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:54.257 23:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:54.257 23:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.257 23:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.257 23:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.514 23:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MGVkOWQ3NTk4ODk3OWM0Y2FkYjBjZmY0NTc5MGM0YmVkNWI4OGI5MDZhYWExODg4jJz3QQ==: --dhchap-ctrl-secret DHHC-1:01:YjE2NDI4YWY4YjkzNGEyMGQ3MmI0Mjk1MDFmODY1MWOvraqq: 00:14:55.447 23:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
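The disconnect above closes one full verification round; the trace repeats the same five-step round for every key/dhgroup combination that follows. A minimal sketch of a single round, reconstructed from the trace entries themselves — the rpc_cmd/hostrpc helpers from target/auth.sh are expanded inline here, key2/ckey2 name DH-HMAC-CHAP keys registered earlier in the run (outside this excerpt), and $KEY2_SECRET/$CKEY2_SECRET stand in for the raw DHHC-1 strings that appear verbatim in the log:

  #!/usr/bin/env bash
  set -e

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a

  # 1. Pin the SPDK host (initiator) side to one digest/dhgroup combination.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

  # 2. Authorize the host NQN on the target subsystem with a key pair;
  #    the key3 rounds drop --dhchap-ctrlr-key to test unidirectional auth.
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # 3. Attach a controller from the SPDK host; DH-HMAC-CHAP runs during connect.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # 4. Ask the target what the qpair actually negotiated and compare.
  qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]

  # 5. Detach, then repeat the connect with the kernel initiator, which takes
  #    raw DHHC-1 secrets (placeholders here) instead of registered key names.
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid 29f67375-a902-e411-ace9-001e67bc3c9a \
      --dhchap-secret "$KEY2_SECRET" --dhchap-ctrl-secret "$CKEY2_SECRET"
  nvme disconnect -n "$subnqn"
  $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The [[ ]] comparisons are what the trace's escaped pattern matches (e.g. \s\h\a\2\5\6) are performing: if the negotiated auth state, digest, or dhgroup differs from what bdev_nvme_set_options pinned, the round fails and the run aborts.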
00:14:55.447 23:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:55.447 23:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.447 23:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.447 23:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.447 23:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:55.447 23:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:55.447 23:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:55.705 23:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:14:55.705 23:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:55.705 23:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:55.705 23:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:55.705 23:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:55.705 23:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.705 23:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:55.705 23:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.705 23:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.705 23:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.705 23:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:55.705 23:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:56.270 00:14:56.270 23:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:56.270 23:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:56.270 23:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.270 23:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.270 23:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.270 23:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.270 23:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:14:56.270 23:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.270 23:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:56.270 { 00:14:56.270 "cntlid": 23, 00:14:56.270 "qid": 0, 00:14:56.270 "state": "enabled", 00:14:56.270 "thread": "nvmf_tgt_poll_group_000", 00:14:56.270 "listen_address": { 00:14:56.270 "trtype": "TCP", 00:14:56.270 "adrfam": "IPv4", 00:14:56.270 "traddr": "10.0.0.2", 00:14:56.270 "trsvcid": "4420" 00:14:56.270 }, 00:14:56.270 "peer_address": { 00:14:56.270 "trtype": "TCP", 00:14:56.270 "adrfam": "IPv4", 00:14:56.270 "traddr": "10.0.0.1", 00:14:56.270 "trsvcid": "38862" 00:14:56.270 }, 00:14:56.270 "auth": { 00:14:56.270 "state": "completed", 00:14:56.270 "digest": "sha256", 00:14:56.270 "dhgroup": "ffdhe3072" 00:14:56.270 } 00:14:56.270 } 00:14:56.270 ]' 00:14:56.270 23:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:56.528 23:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:56.528 23:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:56.528 23:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:56.528 23:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:56.528 23:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.528 23:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.528 23:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.785 23:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MzMzNmIzZGYyYTMxY2JiNGQ5Njk2ODZiZGZjNDhiNGJhNGFlYjI4NmRkNDFlYjViMDhhZWZjMTQzYzcyOWMxMxeY0e4=: 00:14:57.718 23:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.718 23:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:57.718 23:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.718 23:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.718 23:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.718 23:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:57.718 23:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:57.718 23:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:57.718 23:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:57.976 23:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:14:57.976 23:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:57.976 23:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:57.976 23:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:57.976 23:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:57.976 23:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.976 23:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.976 23:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.976 23:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.976 23:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.976 23:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.976 23:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.234 00:14:58.234 23:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:58.234 23:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:58.234 23:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.491 23:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.491 23:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.491 23:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.491 23:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.491 23:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.491 23:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:58.491 { 00:14:58.491 "cntlid": 25, 00:14:58.491 "qid": 0, 00:14:58.491 "state": "enabled", 00:14:58.491 "thread": "nvmf_tgt_poll_group_000", 00:14:58.491 "listen_address": { 00:14:58.491 "trtype": "TCP", 00:14:58.491 "adrfam": "IPv4", 00:14:58.491 "traddr": "10.0.0.2", 00:14:58.491 "trsvcid": "4420" 00:14:58.491 }, 00:14:58.491 "peer_address": { 00:14:58.491 "trtype": "TCP", 00:14:58.491 "adrfam": "IPv4", 00:14:58.491 "traddr": "10.0.0.1", 00:14:58.491 "trsvcid": "38880" 00:14:58.491 }, 00:14:58.491 "auth": { 00:14:58.491 "state": "completed", 00:14:58.491 "digest": "sha256", 00:14:58.491 "dhgroup": "ffdhe4096" 00:14:58.491 } 00:14:58.491 } 00:14:58.491 ]' 00:14:58.491 23:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:58.491 23:41:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:58.491 23:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:58.748 23:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:58.748 23:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:58.748 23:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.748 23:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.748 23:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:59.005 23:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:NzJhODNmMTE3MDI2NTg1ODA2YjMwNjdjNjBkZmIxMTFhNGJlYzEzZjIyNzRiMGJmoI/A2g==: --dhchap-ctrl-secret DHHC-1:03:MTZkNzZhYzgwMWM5MGFkM2EzNGM3YjhjZTExMmVhNjRjMzU1MzQyODQwNjVmYzE3ODEwYzUyODMzZWIyMzMzZigqzgg=: 00:14:59.937 23:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.937 23:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:59.937 23:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.937 23:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.937 23:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.937 23:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:59.937 23:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:59.937 23:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:00.195 23:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:15:00.195 23:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:00.195 23:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:00.195 23:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:00.195 23:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:00.195 23:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:00.196 23:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:00.196 23:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.196 23:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.196 23:41:35 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.196 23:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:00.196 23:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:00.453 00:15:00.453 23:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:00.453 23:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:00.453 23:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.712 23:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.712 23:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.712 23:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.712 23:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.712 23:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.712 23:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:00.712 { 00:15:00.712 "cntlid": 27, 00:15:00.712 "qid": 0, 00:15:00.712 "state": "enabled", 00:15:00.712 "thread": "nvmf_tgt_poll_group_000", 00:15:00.712 "listen_address": { 00:15:00.712 "trtype": "TCP", 00:15:00.712 "adrfam": "IPv4", 00:15:00.712 "traddr": "10.0.0.2", 00:15:00.712 "trsvcid": "4420" 00:15:00.712 }, 00:15:00.712 "peer_address": { 00:15:00.712 "trtype": "TCP", 00:15:00.712 "adrfam": "IPv4", 00:15:00.712 "traddr": "10.0.0.1", 00:15:00.712 "trsvcid": "45780" 00:15:00.712 }, 00:15:00.712 "auth": { 00:15:00.712 "state": "completed", 00:15:00.712 "digest": "sha256", 00:15:00.712 "dhgroup": "ffdhe4096" 00:15:00.712 } 00:15:00.712 } 00:15:00.712 ]' 00:15:00.712 23:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:00.969 23:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:00.969 23:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:00.969 23:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:00.969 23:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:00.969 23:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.969 23:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.969 23:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:01.228 23:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:MjdlNzliMjY5NWVlMjJlNGEyZjc1N2Y3ODY5NGFkZWL42jLu: --dhchap-ctrl-secret DHHC-1:02:YTE4ZTA5Y2FmMDg1ODc3YzgwYTgxMjdlY2Q0ZjIxZmVjZGY3MTU3NzU2M2Y2MjI00G3iEg==: 00:15:02.193 23:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.193 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.193 23:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:02.193 23:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.193 23:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.193 23:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.193 23:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:02.193 23:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:02.193 23:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:02.451 23:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:15:02.451 23:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:02.451 23:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:02.451 23:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:02.451 23:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:02.451 23:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.451 23:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:02.451 23:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.451 23:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.451 23:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.451 23:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:02.451 23:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:02.709 00:15:02.709 23:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:02.709 23:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:02.709 23:41:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.967 23:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.967 23:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.967 23:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.967 23:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.967 23:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.967 23:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:02.967 { 00:15:02.967 "cntlid": 29, 00:15:02.967 "qid": 0, 00:15:02.967 "state": "enabled", 00:15:02.967 "thread": "nvmf_tgt_poll_group_000", 00:15:02.967 "listen_address": { 00:15:02.967 "trtype": "TCP", 00:15:02.967 "adrfam": "IPv4", 00:15:02.967 "traddr": "10.0.0.2", 00:15:02.967 "trsvcid": "4420" 00:15:02.967 }, 00:15:02.967 "peer_address": { 00:15:02.967 "trtype": "TCP", 00:15:02.967 "adrfam": "IPv4", 00:15:02.967 "traddr": "10.0.0.1", 00:15:02.967 "trsvcid": "45816" 00:15:02.967 }, 00:15:02.967 "auth": { 00:15:02.967 "state": "completed", 00:15:02.967 "digest": "sha256", 00:15:02.967 "dhgroup": "ffdhe4096" 00:15:02.967 } 00:15:02.967 } 00:15:02.967 ]' 00:15:02.967 23:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:02.967 23:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:02.967 23:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:03.225 23:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:03.225 23:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:03.225 23:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.225 23:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.225 23:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.483 23:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MGVkOWQ3NTk4ODk3OWM0Y2FkYjBjZmY0NTc5MGM0YmVkNWI4OGI5MDZhYWExODg4jJz3QQ==: --dhchap-ctrl-secret DHHC-1:01:YjE2NDI4YWY4YjkzNGEyMGQ3MmI0Mjk1MDFmODY1MWOvraqq: 00:15:04.416 23:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.416 23:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:04.416 23:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.416 23:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.416 23:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.416 23:41:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:04.416 23:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:04.416 23:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:04.674 23:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:15:04.674 23:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:04.674 23:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:04.674 23:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:04.674 23:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:04.674 23:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.674 23:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:04.674 23:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.674 23:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.674 23:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.674 23:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:04.674 23:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:04.933 00:15:04.933 23:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:04.933 23:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:04.933 23:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.191 23:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.191 23:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.191 23:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.191 23:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.191 23:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.191 23:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:05.191 { 00:15:05.191 "cntlid": 31, 00:15:05.191 "qid": 0, 00:15:05.191 "state": "enabled", 00:15:05.191 "thread": "nvmf_tgt_poll_group_000", 00:15:05.191 "listen_address": { 00:15:05.191 "trtype": "TCP", 00:15:05.191 "adrfam": "IPv4", 00:15:05.191 "traddr": "10.0.0.2", 00:15:05.191 "trsvcid": "4420" 00:15:05.191 }, 
00:15:05.191 "peer_address": { 00:15:05.191 "trtype": "TCP", 00:15:05.191 "adrfam": "IPv4", 00:15:05.191 "traddr": "10.0.0.1", 00:15:05.191 "trsvcid": "45854" 00:15:05.191 }, 00:15:05.191 "auth": { 00:15:05.191 "state": "completed", 00:15:05.191 "digest": "sha256", 00:15:05.191 "dhgroup": "ffdhe4096" 00:15:05.191 } 00:15:05.191 } 00:15:05.191 ]' 00:15:05.191 23:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:05.191 23:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:05.191 23:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:05.449 23:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:05.449 23:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:05.449 23:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.449 23:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.449 23:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.707 23:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MzMzNmIzZGYyYTMxY2JiNGQ5Njk2ODZiZGZjNDhiNGJhNGFlYjI4NmRkNDFlYjViMDhhZWZjMTQzYzcyOWMxMxeY0e4=: 00:15:06.638 23:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.638 23:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:06.638 23:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.638 23:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.638 23:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.638 23:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:06.638 23:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:06.638 23:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:06.638 23:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:06.896 23:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:15:06.896 23:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:06.896 23:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:06.896 23:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:06.896 23:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:06.896 23:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:15:06.896 23:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.896 23:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.896 23:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.896 23:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.896 23:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.896 23:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.461 00:15:07.461 23:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:07.461 23:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:07.461 23:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.719 23:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.719 23:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.719 23:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.719 23:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.719 23:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.719 23:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:07.719 { 00:15:07.719 "cntlid": 33, 00:15:07.719 "qid": 0, 00:15:07.719 "state": "enabled", 00:15:07.719 "thread": "nvmf_tgt_poll_group_000", 00:15:07.719 "listen_address": { 00:15:07.719 "trtype": "TCP", 00:15:07.719 "adrfam": "IPv4", 00:15:07.719 "traddr": "10.0.0.2", 00:15:07.719 "trsvcid": "4420" 00:15:07.719 }, 00:15:07.719 "peer_address": { 00:15:07.719 "trtype": "TCP", 00:15:07.719 "adrfam": "IPv4", 00:15:07.719 "traddr": "10.0.0.1", 00:15:07.719 "trsvcid": "45892" 00:15:07.719 }, 00:15:07.719 "auth": { 00:15:07.719 "state": "completed", 00:15:07.719 "digest": "sha256", 00:15:07.719 "dhgroup": "ffdhe6144" 00:15:07.719 } 00:15:07.719 } 00:15:07.719 ]' 00:15:07.719 23:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:07.719 23:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:07.719 23:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:07.719 23:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:07.719 23:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:07.719 23:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.719 23:41:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.719 23:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.977 23:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:NzJhODNmMTE3MDI2NTg1ODA2YjMwNjdjNjBkZmIxMTFhNGJlYzEzZjIyNzRiMGJmoI/A2g==: --dhchap-ctrl-secret DHHC-1:03:MTZkNzZhYzgwMWM5MGFkM2EzNGM3YjhjZTExMmVhNjRjMzU1MzQyODQwNjVmYzE3ODEwYzUyODMzZWIyMzMzZigqzgg=: 00:15:08.909 23:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.909 23:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:08.909 23:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.909 23:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.909 23:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.909 23:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:08.909 23:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:08.909 23:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:09.166 23:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:15:09.166 23:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:09.166 23:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:09.166 23:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:09.166 23:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:09.166 23:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.166 23:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.166 23:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.166 23:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.166 23:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.166 23:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.166 23:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.730 00:15:09.730 23:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:09.730 23:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:09.730 23:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.988 23:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.988 23:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.988 23:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.988 23:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.988 23:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.988 23:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:09.988 { 00:15:09.988 "cntlid": 35, 00:15:09.988 "qid": 0, 00:15:09.988 "state": "enabled", 00:15:09.988 "thread": "nvmf_tgt_poll_group_000", 00:15:09.988 "listen_address": { 00:15:09.988 "trtype": "TCP", 00:15:09.988 "adrfam": "IPv4", 00:15:09.988 "traddr": "10.0.0.2", 00:15:09.988 "trsvcid": "4420" 00:15:09.988 }, 00:15:09.988 "peer_address": { 00:15:09.988 "trtype": "TCP", 00:15:09.988 "adrfam": "IPv4", 00:15:09.988 "traddr": "10.0.0.1", 00:15:09.988 "trsvcid": "45924" 00:15:09.988 }, 00:15:09.988 "auth": { 00:15:09.988 "state": "completed", 00:15:09.988 "digest": "sha256", 00:15:09.988 "dhgroup": "ffdhe6144" 00:15:09.988 } 00:15:09.988 } 00:15:09.988 ]' 00:15:09.988 23:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:09.988 23:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:09.988 23:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:10.246 23:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:10.246 23:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:10.246 23:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.246 23:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.246 23:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.505 23:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:MjdlNzliMjY5NWVlMjJlNGEyZjc1N2Y3ODY5NGFkZWL42jLu: --dhchap-ctrl-secret DHHC-1:02:YTE4ZTA5Y2FmMDg1ODc3YzgwYTgxMjdlY2Q0ZjIxZmVjZGY3MTU3NzU2M2Y2MjI00G3iEg==: 00:15:11.439 23:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.439 23:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:11.439 23:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.439 23:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.439 23:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.439 23:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:11.439 23:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:11.439 23:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:11.439 23:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:15:11.439 23:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:11.439 23:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:11.439 23:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:11.439 23:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:11.439 23:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.439 23:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.439 23:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.439 23:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.439 23:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.439 23:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.439 23:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.005 00:15:12.005 23:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:12.005 23:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:12.005 23:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.263 23:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.263 23:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.263 23:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.263 23:41:47 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:12.263 23:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.263 23:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:12.263 { 00:15:12.263 "cntlid": 37, 00:15:12.263 "qid": 0, 00:15:12.263 "state": "enabled", 00:15:12.263 "thread": "nvmf_tgt_poll_group_000", 00:15:12.263 "listen_address": { 00:15:12.263 "trtype": "TCP", 00:15:12.263 "adrfam": "IPv4", 00:15:12.263 "traddr": "10.0.0.2", 00:15:12.263 "trsvcid": "4420" 00:15:12.263 }, 00:15:12.263 "peer_address": { 00:15:12.263 "trtype": "TCP", 00:15:12.263 "adrfam": "IPv4", 00:15:12.263 "traddr": "10.0.0.1", 00:15:12.263 "trsvcid": "37582" 00:15:12.263 }, 00:15:12.263 "auth": { 00:15:12.263 "state": "completed", 00:15:12.263 "digest": "sha256", 00:15:12.264 "dhgroup": "ffdhe6144" 00:15:12.264 } 00:15:12.264 } 00:15:12.264 ]' 00:15:12.264 23:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:12.264 23:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:12.264 23:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:12.264 23:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:12.264 23:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:12.522 23:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.522 23:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.522 23:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.780 23:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MGVkOWQ3NTk4ODk3OWM0Y2FkYjBjZmY0NTc5MGM0YmVkNWI4OGI5MDZhYWExODg4jJz3QQ==: --dhchap-ctrl-secret DHHC-1:01:YjE2NDI4YWY4YjkzNGEyMGQ3MmI0Mjk1MDFmODY1MWOvraqq: 00:15:13.711 23:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.711 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.711 23:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:13.711 23:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.711 23:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.711 23:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.711 23:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:13.711 23:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:13.711 23:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:13.968 23:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:15:13.968 23:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:13.968 23:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:13.968 23:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:13.968 23:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:13.968 23:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.968 23:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:13.968 23:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.968 23:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.968 23:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.968 23:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:13.968 23:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:14.531 00:15:14.531 23:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:14.531 23:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:14.531 23:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.531 23:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.531 23:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.531 23:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.531 23:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.531 23:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.531 23:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:14.531 { 00:15:14.531 "cntlid": 39, 00:15:14.531 "qid": 0, 00:15:14.531 "state": "enabled", 00:15:14.531 "thread": "nvmf_tgt_poll_group_000", 00:15:14.531 "listen_address": { 00:15:14.531 "trtype": "TCP", 00:15:14.531 "adrfam": "IPv4", 00:15:14.531 "traddr": "10.0.0.2", 00:15:14.531 "trsvcid": "4420" 00:15:14.531 }, 00:15:14.531 "peer_address": { 00:15:14.531 "trtype": "TCP", 00:15:14.531 "adrfam": "IPv4", 00:15:14.531 "traddr": "10.0.0.1", 00:15:14.531 "trsvcid": "37612" 00:15:14.531 }, 00:15:14.531 "auth": { 00:15:14.531 "state": "completed", 00:15:14.531 "digest": "sha256", 00:15:14.531 "dhgroup": "ffdhe6144" 00:15:14.531 } 00:15:14.531 } 00:15:14.531 ]' 00:15:14.531 23:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:14.787 23:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:14.787 23:41:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:14.787 23:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:14.787 23:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:14.787 23:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.787 23:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.787 23:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.044 23:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MzMzNmIzZGYyYTMxY2JiNGQ5Njk2ODZiZGZjNDhiNGJhNGFlYjI4NmRkNDFlYjViMDhhZWZjMTQzYzcyOWMxMxeY0e4=: 00:15:15.972 23:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.972 23:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:15.972 23:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.972 23:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.972 23:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.972 23:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:15.972 23:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:15.972 23:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:15.972 23:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:16.228 23:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:15:16.228 23:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:16.228 23:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:16.228 23:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:16.228 23:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:16.228 23:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.228 23:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.228 23:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.228 23:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.228 23:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.228 23:41:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.228 23:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.159 00:15:17.159 23:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:17.159 23:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:17.159 23:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.159 23:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.159 23:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.159 23:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.159 23:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.159 23:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.159 23:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:17.159 { 00:15:17.159 "cntlid": 41, 00:15:17.159 "qid": 0, 00:15:17.159 "state": "enabled", 00:15:17.159 "thread": "nvmf_tgt_poll_group_000", 00:15:17.159 "listen_address": { 00:15:17.159 "trtype": "TCP", 00:15:17.159 "adrfam": "IPv4", 00:15:17.159 "traddr": "10.0.0.2", 00:15:17.159 "trsvcid": "4420" 00:15:17.159 }, 00:15:17.159 "peer_address": { 00:15:17.159 "trtype": "TCP", 00:15:17.159 "adrfam": "IPv4", 00:15:17.159 "traddr": "10.0.0.1", 00:15:17.159 "trsvcid": "37642" 00:15:17.159 }, 00:15:17.159 "auth": { 00:15:17.159 "state": "completed", 00:15:17.159 "digest": "sha256", 00:15:17.159 "dhgroup": "ffdhe8192" 00:15:17.159 } 00:15:17.159 } 00:15:17.159 ]' 00:15:17.159 23:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:17.417 23:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:17.417 23:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:17.417 23:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:17.417 23:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:17.417 23:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.417 23:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.417 23:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.675 23:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret 
DHHC-1:00:NzJhODNmMTE3MDI2NTg1ODA2YjMwNjdjNjBkZmIxMTFhNGJlYzEzZjIyNzRiMGJmoI/A2g==: --dhchap-ctrl-secret DHHC-1:03:MTZkNzZhYzgwMWM5MGFkM2EzNGM3YjhjZTExMmVhNjRjMzU1MzQyODQwNjVmYzE3ODEwYzUyODMzZWIyMzMzZigqzgg=: 00:15:18.636 23:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.636 23:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:18.636 23:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.636 23:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.636 23:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.636 23:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:18.636 23:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:18.636 23:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:18.894 23:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:15:18.894 23:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:18.894 23:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:18.894 23:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:18.894 23:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:18.894 23:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.894 23:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.894 23:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.894 23:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.894 23:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.894 23:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.894 23:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:19.458 00:15:19.458 23:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:19.458 23:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:19.458 23:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.716 23:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.716 23:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.716 23:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.716 23:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.716 23:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.716 23:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:19.716 { 00:15:19.716 "cntlid": 43, 00:15:19.716 "qid": 0, 00:15:19.716 "state": "enabled", 00:15:19.716 "thread": "nvmf_tgt_poll_group_000", 00:15:19.716 "listen_address": { 00:15:19.716 "trtype": "TCP", 00:15:19.716 "adrfam": "IPv4", 00:15:19.716 "traddr": "10.0.0.2", 00:15:19.716 "trsvcid": "4420" 00:15:19.716 }, 00:15:19.716 "peer_address": { 00:15:19.716 "trtype": "TCP", 00:15:19.716 "adrfam": "IPv4", 00:15:19.716 "traddr": "10.0.0.1", 00:15:19.716 "trsvcid": "37674" 00:15:19.716 }, 00:15:19.716 "auth": { 00:15:19.716 "state": "completed", 00:15:19.716 "digest": "sha256", 00:15:19.716 "dhgroup": "ffdhe8192" 00:15:19.716 } 00:15:19.716 } 00:15:19.716 ]' 00:15:19.716 23:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:19.974 23:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:19.974 23:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:19.974 23:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:19.974 23:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:19.974 23:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.974 23:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.974 23:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.232 23:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:MjdlNzliMjY5NWVlMjJlNGEyZjc1N2Y3ODY5NGFkZWL42jLu: --dhchap-ctrl-secret DHHC-1:02:YTE4ZTA5Y2FmMDg1ODc3YzgwYTgxMjdlY2Q0ZjIxZmVjZGY3MTU3NzU2M2Y2MjI00G3iEg==: 00:15:21.163 23:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.163 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.163 23:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:21.163 23:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.163 23:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.163 23:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.163 23:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:15:21.163 23:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:21.163 23:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:21.420 23:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:15:21.420 23:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:21.420 23:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:21.420 23:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:21.420 23:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:21.420 23:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.420 23:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.420 23:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.420 23:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.420 23:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.420 23:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.420 23:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.353 00:15:22.353 23:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:22.353 23:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:22.353 23:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.610 23:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.610 23:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.610 23:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.610 23:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.610 23:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.610 23:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:22.610 { 00:15:22.610 "cntlid": 45, 00:15:22.610 "qid": 0, 00:15:22.610 "state": "enabled", 00:15:22.610 "thread": "nvmf_tgt_poll_group_000", 00:15:22.610 "listen_address": { 00:15:22.610 "trtype": "TCP", 00:15:22.610 "adrfam": "IPv4", 00:15:22.610 "traddr": "10.0.0.2", 00:15:22.610 "trsvcid": "4420" 
00:15:22.610 }, 00:15:22.610 "peer_address": { 00:15:22.610 "trtype": "TCP", 00:15:22.610 "adrfam": "IPv4", 00:15:22.610 "traddr": "10.0.0.1", 00:15:22.610 "trsvcid": "57496" 00:15:22.610 }, 00:15:22.610 "auth": { 00:15:22.610 "state": "completed", 00:15:22.610 "digest": "sha256", 00:15:22.610 "dhgroup": "ffdhe8192" 00:15:22.610 } 00:15:22.610 } 00:15:22.610 ]' 00:15:22.610 23:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:22.610 23:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:22.610 23:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:22.610 23:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:22.610 23:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:22.610 23:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.610 23:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.610 23:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.868 23:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MGVkOWQ3NTk4ODk3OWM0Y2FkYjBjZmY0NTc5MGM0YmVkNWI4OGI5MDZhYWExODg4jJz3QQ==: --dhchap-ctrl-secret DHHC-1:01:YjE2NDI4YWY4YjkzNGEyMGQ3MmI0Mjk1MDFmODY1MWOvraqq: 00:15:23.799 23:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.799 23:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:23.799 23:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.799 23:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.799 23:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.799 23:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:23.799 23:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:23.799 23:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:24.056 23:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:15:24.056 23:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:24.056 23:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:24.056 23:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:24.056 23:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:24.056 23:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.057 23:41:59 
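After the target-side qpair checks, the same credentials are exercised from the kernel initiator. A sketch of that nvme-cli leg, mirroring the flags in the trace; $key and $ckey here are placeholders standing in for the generated DHHC-1 secret strings shown in the log:

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
    # Connect with the host secret; --dhchap-ctrl-secret supplies the controller
    # (bidirectional) secret on the iterations that configured a ckey.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid 29f67375-a902-e411-ace9-001e67bc3c9a \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0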
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:24.057 23:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.057 23:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.057 23:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.057 23:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:24.057 23:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:24.989 00:15:24.989 23:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:24.989 23:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:24.989 23:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.247 23:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.247 23:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:25.247 23:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.247 23:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.247 23:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.247 23:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:25.247 { 00:15:25.247 "cntlid": 47, 00:15:25.247 "qid": 0, 00:15:25.247 "state": "enabled", 00:15:25.247 "thread": "nvmf_tgt_poll_group_000", 00:15:25.247 "listen_address": { 00:15:25.247 "trtype": "TCP", 00:15:25.247 "adrfam": "IPv4", 00:15:25.247 "traddr": "10.0.0.2", 00:15:25.247 "trsvcid": "4420" 00:15:25.247 }, 00:15:25.247 "peer_address": { 00:15:25.247 "trtype": "TCP", 00:15:25.247 "adrfam": "IPv4", 00:15:25.247 "traddr": "10.0.0.1", 00:15:25.247 "trsvcid": "57524" 00:15:25.247 }, 00:15:25.247 "auth": { 00:15:25.247 "state": "completed", 00:15:25.247 "digest": "sha256", 00:15:25.247 "dhgroup": "ffdhe8192" 00:15:25.247 } 00:15:25.247 } 00:15:25.247 ]' 00:15:25.247 23:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:25.247 23:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:25.247 23:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:25.247 23:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:25.247 23:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:25.247 23:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:25.247 23:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.247 
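An aside on the DHHC-1 strings threaded through this log: they appear to follow the NVMe DH-HMAC-CHAP secret representation used by nvme-cli, where the two digits after "DHHC-1:" encode the key transformation hash (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload carries the key with a CRC-32 appended. Assuming a recent nvme-cli with the gen-dhchap-key subcommand, a comparable secret could be produced with something like:

    # Generate a 48-byte DH-HMAC-CHAP secret transformed with SHA-384 (hmac id 2);
    # flag names here are an assumption based on nvme-cli, not taken from this log.
    nvme gen-dhchap-key --key-length 48 --hmac 2 \
        --nqn nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a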
23:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:25.504 23:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MzMzNmIzZGYyYTMxY2JiNGQ5Njk2ODZiZGZjNDhiNGJhNGFlYjI4NmRkNDFlYjViMDhhZWZjMTQzYzcyOWMxMxeY0e4=: 00:15:26.452 23:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.452 23:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:26.452 23:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.452 23:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.452 23:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.452 23:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:26.452 23:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:26.452 23:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:26.452 23:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:26.452 23:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:26.709 23:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:15:26.709 23:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:26.709 23:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:26.709 23:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:26.709 23:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:26.709 23:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.709 23:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.709 23:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.709 23:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.709 23:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.710 23:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.710 23:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.966 00:15:26.966 23:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:26.966 23:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:26.966 23:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.223 23:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.223 23:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.223 23:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.223 23:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.223 23:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.223 23:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:27.223 { 00:15:27.223 "cntlid": 49, 00:15:27.223 "qid": 0, 00:15:27.223 "state": "enabled", 00:15:27.223 "thread": "nvmf_tgt_poll_group_000", 00:15:27.223 "listen_address": { 00:15:27.223 "trtype": "TCP", 00:15:27.223 "adrfam": "IPv4", 00:15:27.223 "traddr": "10.0.0.2", 00:15:27.223 "trsvcid": "4420" 00:15:27.223 }, 00:15:27.223 "peer_address": { 00:15:27.223 "trtype": "TCP", 00:15:27.223 "adrfam": "IPv4", 00:15:27.223 "traddr": "10.0.0.1", 00:15:27.223 "trsvcid": "57560" 00:15:27.223 }, 00:15:27.223 "auth": { 00:15:27.223 "state": "completed", 00:15:27.223 "digest": "sha384", 00:15:27.223 "dhgroup": "null" 00:15:27.223 } 00:15:27.223 } 00:15:27.223 ]' 00:15:27.223 23:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:27.223 23:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:27.223 23:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:27.223 23:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:27.223 23:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:27.480 23:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.480 23:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.480 23:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.737 23:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:NzJhODNmMTE3MDI2NTg1ODA2YjMwNjdjNjBkZmIxMTFhNGJlYzEzZjIyNzRiMGJmoI/A2g==: --dhchap-ctrl-secret DHHC-1:03:MTZkNzZhYzgwMWM5MGFkM2EzNGM3YjhjZTExMmVhNjRjMzU1MzQyODQwNjVmYzE3ODEwYzUyODMzZWIyMzMzZigqzgg=: 00:15:28.668 23:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.668 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.668 23:42:03 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:28.668 23:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.668 23:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.668 23:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.668 23:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:28.668 23:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:28.668 23:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:28.668 23:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:15:28.668 23:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:28.668 23:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:28.668 23:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:28.668 23:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:28.668 23:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.668 23:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.668 23:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.668 23:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.668 23:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.668 23:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.668 23:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.232 00:15:29.232 23:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:29.232 23:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.232 23:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:29.232 23:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.232 23:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.232 23:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.232 23:42:04 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:29.232 23:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.232 23:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:29.232 { 00:15:29.232 "cntlid": 51, 00:15:29.232 "qid": 0, 00:15:29.232 "state": "enabled", 00:15:29.232 "thread": "nvmf_tgt_poll_group_000", 00:15:29.232 "listen_address": { 00:15:29.232 "trtype": "TCP", 00:15:29.232 "adrfam": "IPv4", 00:15:29.232 "traddr": "10.0.0.2", 00:15:29.232 "trsvcid": "4420" 00:15:29.232 }, 00:15:29.232 "peer_address": { 00:15:29.232 "trtype": "TCP", 00:15:29.232 "adrfam": "IPv4", 00:15:29.232 "traddr": "10.0.0.1", 00:15:29.232 "trsvcid": "57584" 00:15:29.232 }, 00:15:29.232 "auth": { 00:15:29.232 "state": "completed", 00:15:29.232 "digest": "sha384", 00:15:29.232 "dhgroup": "null" 00:15:29.232 } 00:15:29.232 } 00:15:29.232 ]' 00:15:29.232 23:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:29.489 23:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:29.489 23:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:29.489 23:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:29.489 23:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:29.489 23:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.489 23:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.489 23:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.746 23:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:MjdlNzliMjY5NWVlMjJlNGEyZjc1N2Y3ODY5NGFkZWL42jLu: --dhchap-ctrl-secret DHHC-1:02:YTE4ZTA5Y2FmMDg1ODc3YzgwYTgxMjdlY2Q0ZjIxZmVjZGY3MTU3NzU2M2Y2MjI00G3iEg==: 00:15:30.678 23:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.678 23:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:30.678 23:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.678 23:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.678 23:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.678 23:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:30.678 23:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:30.678 23:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:30.936 23:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:15:30.936 23:42:05 
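The repeating @34-@49 blocks are successive expansions of target/auth.sh's connect_authenticate helper. Its shape, reconstructed from the xtrace rather than quoted from the source, is roughly:

    connect_authenticate() {
        local digest dhgroup key ckey qpairs
        digest=$1 dhgroup=$2 key=key$3
        # Only pass a controller key when one exists for this key index
        # (verbatim expansion visible at @37 in the trace).
        ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
        # rpc_cmd and hostrpc are the script's target- and host-side RPC wrappers.
        rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
            --dhchap-key "$key" "${ckey[@]}"
        hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 \
            -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key "$key" "${ckey[@]}"
        # ...followed by the @44-@49 steps above: verify .auth.{digest,dhgroup,state}
        # via nvmf_subsystem_get_qpairs, then detach the controller.
    }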
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:30.936 23:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:30.936 23:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:30.936 23:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:30.936 23:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.936 23:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.936 23:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.936 23:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.936 23:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.936 23:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.936 23:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:31.194 00:15:31.194 23:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:31.194 23:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:31.194 23:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.451 23:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.452 23:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.452 23:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.452 23:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.452 23:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.452 23:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:31.452 { 00:15:31.452 "cntlid": 53, 00:15:31.452 "qid": 0, 00:15:31.452 "state": "enabled", 00:15:31.452 "thread": "nvmf_tgt_poll_group_000", 00:15:31.452 "listen_address": { 00:15:31.452 "trtype": "TCP", 00:15:31.452 "adrfam": "IPv4", 00:15:31.452 "traddr": "10.0.0.2", 00:15:31.452 "trsvcid": "4420" 00:15:31.452 }, 00:15:31.452 "peer_address": { 00:15:31.452 "trtype": "TCP", 00:15:31.452 "adrfam": "IPv4", 00:15:31.452 "traddr": "10.0.0.1", 00:15:31.452 "trsvcid": "34964" 00:15:31.452 }, 00:15:31.452 "auth": { 00:15:31.452 "state": "completed", 00:15:31.452 "digest": "sha384", 00:15:31.452 "dhgroup": "null" 00:15:31.452 } 00:15:31.452 } 00:15:31.452 ]' 00:15:31.452 23:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:31.452 23:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:15:31.452 23:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:31.452 23:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:31.452 23:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:31.709 23:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.709 23:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.709 23:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.966 23:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MGVkOWQ3NTk4ODk3OWM0Y2FkYjBjZmY0NTc5MGM0YmVkNWI4OGI5MDZhYWExODg4jJz3QQ==: --dhchap-ctrl-secret DHHC-1:01:YjE2NDI4YWY4YjkzNGEyMGQ3MmI0Mjk1MDFmODY1MWOvraqq: 00:15:32.897 23:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.897 23:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:32.897 23:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.897 23:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.897 23:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.897 23:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:32.897 23:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:32.897 23:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:32.897 23:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:15:32.897 23:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:32.897 23:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:32.897 23:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:32.897 23:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:32.897 23:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.897 23:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:32.897 23:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.897 23:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.897 23:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.897 23:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:32.898 23:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:33.155 00:15:33.155 23:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:33.155 23:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:33.155 23:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.414 23:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.414 23:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.414 23:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.414 23:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.670 23:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.670 23:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:33.670 { 00:15:33.670 "cntlid": 55, 00:15:33.670 "qid": 0, 00:15:33.670 "state": "enabled", 00:15:33.670 "thread": "nvmf_tgt_poll_group_000", 00:15:33.670 "listen_address": { 00:15:33.670 "trtype": "TCP", 00:15:33.670 "adrfam": "IPv4", 00:15:33.670 "traddr": "10.0.0.2", 00:15:33.670 "trsvcid": "4420" 00:15:33.670 }, 00:15:33.670 "peer_address": { 00:15:33.670 "trtype": "TCP", 00:15:33.670 "adrfam": "IPv4", 00:15:33.670 "traddr": "10.0.0.1", 00:15:33.670 "trsvcid": "34994" 00:15:33.670 }, 00:15:33.670 "auth": { 00:15:33.670 "state": "completed", 00:15:33.670 "digest": "sha384", 00:15:33.670 "dhgroup": "null" 00:15:33.670 } 00:15:33.670 } 00:15:33.670 ]' 00:15:33.670 23:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:33.670 23:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:33.670 23:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:33.670 23:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:33.670 23:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:33.670 23:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.670 23:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.670 23:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.926 23:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MzMzNmIzZGYyYTMxY2JiNGQ5Njk2ODZiZGZjNDhiNGJhNGFlYjI4NmRkNDFlYjViMDhhZWZjMTQzYzcyOWMxMxeY0e4=: 00:15:34.892 23:42:09 
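The @91/@92/@93 markers, visible where each group of iterations starts, indicate the driver is a triple loop over digests, DH groups, and key indices. A sketch consistent with the trace; the array contents are inferred from the values this log cycles through (sha256 and sha384 so far, with the remaining DH groups presumably covered in later iterations):

    digests=(sha256 sha384)                                  # sha512 presumably follows
    dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                # Pin the host to one digest/DH-group combination...
                hostrpc bdev_nvme_set_options --dhchap-digests "$digest" \
                    --dhchap-dhgroups "$dhgroup"
                # ...then run one authenticate/verify/teardown cycle with key$keyid.
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done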
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.892 23:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:34.892 23:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.892 23:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.892 23:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.892 23:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:34.892 23:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:34.892 23:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:34.892 23:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:35.150 23:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:15:35.150 23:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:35.150 23:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:35.150 23:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:35.150 23:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:35.150 23:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.150 23:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.150 23:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.150 23:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.150 23:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.150 23:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.150 23:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.407 00:15:35.407 23:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:35.407 23:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:35.407 23:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.665 23:42:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.665 23:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.665 23:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.665 23:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.665 23:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.665 23:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:35.665 { 00:15:35.665 "cntlid": 57, 00:15:35.665 "qid": 0, 00:15:35.665 "state": "enabled", 00:15:35.665 "thread": "nvmf_tgt_poll_group_000", 00:15:35.665 "listen_address": { 00:15:35.665 "trtype": "TCP", 00:15:35.665 "adrfam": "IPv4", 00:15:35.665 "traddr": "10.0.0.2", 00:15:35.665 "trsvcid": "4420" 00:15:35.665 }, 00:15:35.665 "peer_address": { 00:15:35.665 "trtype": "TCP", 00:15:35.665 "adrfam": "IPv4", 00:15:35.665 "traddr": "10.0.0.1", 00:15:35.665 "trsvcid": "35026" 00:15:35.665 }, 00:15:35.665 "auth": { 00:15:35.665 "state": "completed", 00:15:35.665 "digest": "sha384", 00:15:35.665 "dhgroup": "ffdhe2048" 00:15:35.665 } 00:15:35.665 } 00:15:35.665 ]' 00:15:35.665 23:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:35.665 23:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:35.665 23:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:35.665 23:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:35.665 23:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:35.665 23:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.665 23:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.665 23:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.922 23:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:NzJhODNmMTE3MDI2NTg1ODA2YjMwNjdjNjBkZmIxMTFhNGJlYzEzZjIyNzRiMGJmoI/A2g==: --dhchap-ctrl-secret DHHC-1:03:MTZkNzZhYzgwMWM5MGFkM2EzNGM3YjhjZTExMmVhNjRjMzU1MzQyODQwNjVmYzE3ODEwYzUyODMzZWIyMzMzZigqzgg=: 00:15:36.855 23:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.855 23:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:36.855 23:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.855 23:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.855 23:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.855 23:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:36.855 23:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:36.855 23:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:37.112 23:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:15:37.112 23:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:37.112 23:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:37.112 23:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:37.112 23:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:37.112 23:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.112 23:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.112 23:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.112 23:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.112 23:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.112 23:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.112 23:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.370 00:15:37.370 23:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:37.370 23:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:37.370 23:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.627 23:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.627 23:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.627 23:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.627 23:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.627 23:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.627 23:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:37.627 { 00:15:37.627 "cntlid": 59, 00:15:37.627 "qid": 0, 00:15:37.627 "state": "enabled", 00:15:37.627 "thread": "nvmf_tgt_poll_group_000", 00:15:37.627 "listen_address": { 00:15:37.627 "trtype": "TCP", 00:15:37.627 "adrfam": "IPv4", 00:15:37.627 "traddr": "10.0.0.2", 00:15:37.627 "trsvcid": "4420" 00:15:37.627 }, 00:15:37.627 "peer_address": { 00:15:37.627 "trtype": "TCP", 00:15:37.627 "adrfam": "IPv4", 00:15:37.627 
"traddr": "10.0.0.1", 00:15:37.627 "trsvcid": "35050" 00:15:37.627 }, 00:15:37.627 "auth": { 00:15:37.627 "state": "completed", 00:15:37.627 "digest": "sha384", 00:15:37.627 "dhgroup": "ffdhe2048" 00:15:37.627 } 00:15:37.627 } 00:15:37.627 ]' 00:15:37.627 23:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:37.885 23:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:37.885 23:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:37.885 23:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:37.885 23:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:37.885 23:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.885 23:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.885 23:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.142 23:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:MjdlNzliMjY5NWVlMjJlNGEyZjc1N2Y3ODY5NGFkZWL42jLu: --dhchap-ctrl-secret DHHC-1:02:YTE4ZTA5Y2FmMDg1ODc3YzgwYTgxMjdlY2Q0ZjIxZmVjZGY3MTU3NzU2M2Y2MjI00G3iEg==: 00:15:39.071 23:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.071 23:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:39.071 23:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.071 23:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.071 23:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.071 23:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:39.071 23:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:39.071 23:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:39.329 23:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:15:39.329 23:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:39.329 23:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:39.329 23:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:39.329 23:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:39.329 23:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.329 23:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.329 23:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.329 23:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.329 23:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.329 23:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.329 23:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.586 00:15:39.586 23:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:39.586 23:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.586 23:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:39.843 23:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.843 23:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.843 23:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.843 23:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.843 23:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.843 23:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:39.843 { 00:15:39.843 "cntlid": 61, 00:15:39.843 "qid": 0, 00:15:39.843 "state": "enabled", 00:15:39.843 "thread": "nvmf_tgt_poll_group_000", 00:15:39.843 "listen_address": { 00:15:39.843 "trtype": "TCP", 00:15:39.843 "adrfam": "IPv4", 00:15:39.843 "traddr": "10.0.0.2", 00:15:39.843 "trsvcid": "4420" 00:15:39.843 }, 00:15:39.843 "peer_address": { 00:15:39.843 "trtype": "TCP", 00:15:39.843 "adrfam": "IPv4", 00:15:39.843 "traddr": "10.0.0.1", 00:15:39.843 "trsvcid": "35070" 00:15:39.843 }, 00:15:39.843 "auth": { 00:15:39.843 "state": "completed", 00:15:39.843 "digest": "sha384", 00:15:39.843 "dhgroup": "ffdhe2048" 00:15:39.843 } 00:15:39.843 } 00:15:39.843 ]' 00:15:39.843 23:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:39.843 23:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:39.843 23:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:39.843 23:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:39.843 23:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:39.843 23:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.843 23:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.843 23:42:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.101 23:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MGVkOWQ3NTk4ODk3OWM0Y2FkYjBjZmY0NTc5MGM0YmVkNWI4OGI5MDZhYWExODg4jJz3QQ==: --dhchap-ctrl-secret DHHC-1:01:YjE2NDI4YWY4YjkzNGEyMGQ3MmI0Mjk1MDFmODY1MWOvraqq: 00:15:41.034 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.034 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:41.034 23:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.034 23:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.034 23:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.034 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:41.034 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:41.034 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:41.292 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:15:41.292 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:41.292 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:41.292 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:41.292 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:41.292 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.292 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:41.292 23:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.292 23:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.292 23:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.292 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:41.292 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:41.549 00:15:41.549 23:42:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:41.549 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:41.549 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.806 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.806 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.806 23:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.806 23:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.806 23:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.806 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:41.806 { 00:15:41.806 "cntlid": 63, 00:15:41.806 "qid": 0, 00:15:41.806 "state": "enabled", 00:15:41.806 "thread": "nvmf_tgt_poll_group_000", 00:15:41.806 "listen_address": { 00:15:41.806 "trtype": "TCP", 00:15:41.806 "adrfam": "IPv4", 00:15:41.806 "traddr": "10.0.0.2", 00:15:41.806 "trsvcid": "4420" 00:15:41.806 }, 00:15:41.806 "peer_address": { 00:15:41.806 "trtype": "TCP", 00:15:41.806 "adrfam": "IPv4", 00:15:41.806 "traddr": "10.0.0.1", 00:15:41.806 "trsvcid": "52620" 00:15:41.806 }, 00:15:41.806 "auth": { 00:15:41.807 "state": "completed", 00:15:41.807 "digest": "sha384", 00:15:41.807 "dhgroup": "ffdhe2048" 00:15:41.807 } 00:15:41.807 } 00:15:41.807 ]' 00:15:41.807 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:42.064 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:42.064 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:42.064 23:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:42.064 23:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:42.064 23:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.064 23:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.064 23:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.322 23:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MzMzNmIzZGYyYTMxY2JiNGQ5Njk2ODZiZGZjNDhiNGJhNGFlYjI4NmRkNDFlYjViMDhhZWZjMTQzYzcyOWMxMxeY0e4=: 00:15:43.255 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.255 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:43.255 23:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.255 23:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
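The records above complete one full pass of the auth test loop: for each (digest, dhgroup, key) tuple, target/auth.sh configures both sides, authenticates through the SPDK bdev initiator, re-authenticates through nvme-cli, checks the resulting qpair, and tears everything down. A minimal shell sketch of that per-iteration flow, using only the commands visible in this log — here rpc.py abbreviates the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path, and $hostnqn, $hostid, $key1/$ckey1 are hypothetical placeholders for the host NQN, host ID, and DHHC-1 secrets that the script sets up earlier (not shown in this section):

#!/usr/bin/env bash
# One iteration of the (digest, dhgroup, key) auth loop, as traced above.

# Host side: restrict the SPDK initiator to a single digest/dhgroup pair.
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

# Target side: allow the host NQN on the subsystem with a DH-HMAC-CHAP key
# (plus a controller key when bidirectional auth is being tested).
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Userspace path: attach a bdev controller over TCP, which performs the
# authentication handshake with the named keys.
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Verify the controller came up and the qpair finished auth with the
# expected parameters, then detach.
[[ $(rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers \
     | jq -r '.[].name') == nvme0 ]]
qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# Kernel initiator path: nvme-cli is handed the literal DHHC-1 secrets,
# then the subsystem is torn back down for the next iteration.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret "$key1" --dhchap-ctrl-secret "$ckey1"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"

Note the asymmetry the log shows between the two paths: bdev_nvme_attach_controller references keys by the names registered over the host RPC socket (key1/ckey1), while nvme connect takes the raw DHHC-1:xx: secrets on the command line, so each tuple is exercised through both the SPDK userspace initiator and the kernel initiator before moving on to the next dhgroup.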
00:15:43.255 23:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.255 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:43.255 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:43.255 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:43.255 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:43.513 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:15:43.513 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:43.513 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:43.513 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:43.513 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:43.513 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.513 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.513 23:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.513 23:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.513 23:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.513 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.513 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.771 00:15:43.771 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:43.771 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:43.771 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.029 23:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.029 23:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.029 23:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.029 23:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.029 23:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.029 23:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:44.029 { 
00:15:44.029 "cntlid": 65, 00:15:44.029 "qid": 0, 00:15:44.029 "state": "enabled", 00:15:44.029 "thread": "nvmf_tgt_poll_group_000", 00:15:44.029 "listen_address": { 00:15:44.029 "trtype": "TCP", 00:15:44.029 "adrfam": "IPv4", 00:15:44.029 "traddr": "10.0.0.2", 00:15:44.029 "trsvcid": "4420" 00:15:44.029 }, 00:15:44.029 "peer_address": { 00:15:44.029 "trtype": "TCP", 00:15:44.029 "adrfam": "IPv4", 00:15:44.029 "traddr": "10.0.0.1", 00:15:44.029 "trsvcid": "52652" 00:15:44.029 }, 00:15:44.029 "auth": { 00:15:44.029 "state": "completed", 00:15:44.029 "digest": "sha384", 00:15:44.029 "dhgroup": "ffdhe3072" 00:15:44.029 } 00:15:44.029 } 00:15:44.029 ]' 00:15:44.029 23:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:44.029 23:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:44.029 23:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:44.286 23:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:44.286 23:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:44.286 23:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.286 23:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.286 23:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.543 23:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:NzJhODNmMTE3MDI2NTg1ODA2YjMwNjdjNjBkZmIxMTFhNGJlYzEzZjIyNzRiMGJmoI/A2g==: --dhchap-ctrl-secret DHHC-1:03:MTZkNzZhYzgwMWM5MGFkM2EzNGM3YjhjZTExMmVhNjRjMzU1MzQyODQwNjVmYzE3ODEwYzUyODMzZWIyMzMzZigqzgg=: 00:15:45.477 23:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.477 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.477 23:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:45.477 23:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.477 23:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.477 23:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.477 23:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:45.477 23:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:45.477 23:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:45.735 23:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:15:45.735 23:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:45.735 23:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:15:45.735 23:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:45.735 23:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:45.735 23:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.735 23:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.735 23:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.735 23:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.735 23:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.735 23:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.735 23:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.993 00:15:45.993 23:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:45.993 23:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:45.993 23:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.251 23:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.251 23:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.251 23:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.251 23:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.251 23:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.251 23:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:46.251 { 00:15:46.251 "cntlid": 67, 00:15:46.251 "qid": 0, 00:15:46.251 "state": "enabled", 00:15:46.251 "thread": "nvmf_tgt_poll_group_000", 00:15:46.251 "listen_address": { 00:15:46.251 "trtype": "TCP", 00:15:46.251 "adrfam": "IPv4", 00:15:46.251 "traddr": "10.0.0.2", 00:15:46.251 "trsvcid": "4420" 00:15:46.251 }, 00:15:46.251 "peer_address": { 00:15:46.251 "trtype": "TCP", 00:15:46.251 "adrfam": "IPv4", 00:15:46.251 "traddr": "10.0.0.1", 00:15:46.251 "trsvcid": "52694" 00:15:46.251 }, 00:15:46.251 "auth": { 00:15:46.251 "state": "completed", 00:15:46.251 "digest": "sha384", 00:15:46.251 "dhgroup": "ffdhe3072" 00:15:46.251 } 00:15:46.251 } 00:15:46.251 ]' 00:15:46.251 23:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:46.251 23:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:46.251 23:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:46.251 23:42:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:46.251 23:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:46.251 23:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.251 23:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.251 23:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.508 23:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:MjdlNzliMjY5NWVlMjJlNGEyZjc1N2Y3ODY5NGFkZWL42jLu: --dhchap-ctrl-secret DHHC-1:02:YTE4ZTA5Y2FmMDg1ODc3YzgwYTgxMjdlY2Q0ZjIxZmVjZGY3MTU3NzU2M2Y2MjI00G3iEg==: 00:15:47.442 23:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.442 23:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:47.442 23:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.442 23:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.442 23:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.442 23:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:47.442 23:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:47.442 23:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:47.700 23:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:15:47.700 23:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:47.700 23:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:47.700 23:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:47.700 23:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:47.700 23:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.700 23:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.700 23:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.700 23:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.700 23:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.700 23:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.700 23:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.266 00:15:48.266 23:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:48.266 23:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:48.266 23:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.524 23:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.524 23:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.524 23:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.524 23:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.524 23:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.524 23:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:48.524 { 00:15:48.524 "cntlid": 69, 00:15:48.524 "qid": 0, 00:15:48.524 "state": "enabled", 00:15:48.524 "thread": "nvmf_tgt_poll_group_000", 00:15:48.524 "listen_address": { 00:15:48.524 "trtype": "TCP", 00:15:48.524 "adrfam": "IPv4", 00:15:48.524 "traddr": "10.0.0.2", 00:15:48.524 "trsvcid": "4420" 00:15:48.524 }, 00:15:48.524 "peer_address": { 00:15:48.524 "trtype": "TCP", 00:15:48.524 "adrfam": "IPv4", 00:15:48.524 "traddr": "10.0.0.1", 00:15:48.524 "trsvcid": "52710" 00:15:48.524 }, 00:15:48.524 "auth": { 00:15:48.524 "state": "completed", 00:15:48.524 "digest": "sha384", 00:15:48.524 "dhgroup": "ffdhe3072" 00:15:48.524 } 00:15:48.524 } 00:15:48.524 ]' 00:15:48.524 23:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:48.524 23:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:48.524 23:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:48.524 23:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:48.524 23:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:48.524 23:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.524 23:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.524 23:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.781 23:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MGVkOWQ3NTk4ODk3OWM0Y2FkYjBjZmY0NTc5MGM0YmVkNWI4OGI5MDZhYWExODg4jJz3QQ==: --dhchap-ctrl-secret 
DHHC-1:01:YjE2NDI4YWY4YjkzNGEyMGQ3MmI0Mjk1MDFmODY1MWOvraqq: 00:15:49.714 23:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.714 23:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:49.714 23:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.714 23:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.714 23:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.714 23:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:49.714 23:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:49.715 23:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:49.984 23:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:15:49.984 23:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:49.984 23:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:49.984 23:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:49.984 23:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:49.984 23:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.984 23:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:49.984 23:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.984 23:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.984 23:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.984 23:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:49.984 23:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:50.264 00:15:50.264 23:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:50.264 23:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.264 23:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:50.522 23:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.522 23:42:25 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.522 23:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.522 23:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.522 23:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.522 23:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:50.522 { 00:15:50.522 "cntlid": 71, 00:15:50.522 "qid": 0, 00:15:50.522 "state": "enabled", 00:15:50.522 "thread": "nvmf_tgt_poll_group_000", 00:15:50.522 "listen_address": { 00:15:50.522 "trtype": "TCP", 00:15:50.522 "adrfam": "IPv4", 00:15:50.522 "traddr": "10.0.0.2", 00:15:50.522 "trsvcid": "4420" 00:15:50.522 }, 00:15:50.522 "peer_address": { 00:15:50.522 "trtype": "TCP", 00:15:50.522 "adrfam": "IPv4", 00:15:50.522 "traddr": "10.0.0.1", 00:15:50.522 "trsvcid": "52736" 00:15:50.522 }, 00:15:50.522 "auth": { 00:15:50.522 "state": "completed", 00:15:50.522 "digest": "sha384", 00:15:50.522 "dhgroup": "ffdhe3072" 00:15:50.522 } 00:15:50.522 } 00:15:50.522 ]' 00:15:50.522 23:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:50.522 23:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:50.522 23:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:50.522 23:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:50.522 23:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:50.522 23:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.522 23:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.522 23:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.780 23:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MzMzNmIzZGYyYTMxY2JiNGQ5Njk2ODZiZGZjNDhiNGJhNGFlYjI4NmRkNDFlYjViMDhhZWZjMTQzYzcyOWMxMxeY0e4=: 00:15:51.713 23:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.713 23:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:51.713 23:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.713 23:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.713 23:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.713 23:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:51.713 23:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:51.713 23:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:51.713 23:42:26 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:51.971 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:15:51.971 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:51.971 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:51.971 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:51.971 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:51.971 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.971 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.971 23:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.971 23:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.971 23:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.971 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.971 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.536 00:15:52.536 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:52.536 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:52.536 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.794 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.794 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.794 23:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.794 23:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.794 23:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.794 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:52.794 { 00:15:52.794 "cntlid": 73, 00:15:52.794 "qid": 0, 00:15:52.794 "state": "enabled", 00:15:52.794 "thread": "nvmf_tgt_poll_group_000", 00:15:52.794 "listen_address": { 00:15:52.794 "trtype": "TCP", 00:15:52.794 "adrfam": "IPv4", 00:15:52.794 "traddr": "10.0.0.2", 00:15:52.794 "trsvcid": "4420" 00:15:52.794 }, 00:15:52.794 "peer_address": { 00:15:52.794 "trtype": "TCP", 00:15:52.794 "adrfam": "IPv4", 00:15:52.794 "traddr": "10.0.0.1", 00:15:52.794 "trsvcid": "52818" 00:15:52.794 }, 00:15:52.794 "auth": { 00:15:52.794 
"state": "completed", 00:15:52.794 "digest": "sha384", 00:15:52.794 "dhgroup": "ffdhe4096" 00:15:52.794 } 00:15:52.794 } 00:15:52.794 ]' 00:15:52.794 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:52.794 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:52.794 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:52.794 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:52.794 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:52.794 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.794 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.794 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.052 23:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:NzJhODNmMTE3MDI2NTg1ODA2YjMwNjdjNjBkZmIxMTFhNGJlYzEzZjIyNzRiMGJmoI/A2g==: --dhchap-ctrl-secret DHHC-1:03:MTZkNzZhYzgwMWM5MGFkM2EzNGM3YjhjZTExMmVhNjRjMzU1MzQyODQwNjVmYzE3ODEwYzUyODMzZWIyMzMzZigqzgg=: 00:15:53.985 23:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.985 23:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:53.985 23:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.985 23:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.985 23:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.985 23:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:53.985 23:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:53.985 23:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:54.242 23:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:15:54.242 23:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:54.242 23:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:54.242 23:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:54.242 23:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:54.242 23:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.242 23:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.242 23:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.242 23:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.242 23:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.242 23:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.242 23:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.498 00:15:54.754 23:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:54.754 23:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.754 23:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:54.754 23:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.754 23:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.010 23:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.010 23:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.010 23:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.010 23:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:55.010 { 00:15:55.010 "cntlid": 75, 00:15:55.010 "qid": 0, 00:15:55.010 "state": "enabled", 00:15:55.010 "thread": "nvmf_tgt_poll_group_000", 00:15:55.010 "listen_address": { 00:15:55.010 "trtype": "TCP", 00:15:55.010 "adrfam": "IPv4", 00:15:55.010 "traddr": "10.0.0.2", 00:15:55.010 "trsvcid": "4420" 00:15:55.010 }, 00:15:55.010 "peer_address": { 00:15:55.010 "trtype": "TCP", 00:15:55.010 "adrfam": "IPv4", 00:15:55.010 "traddr": "10.0.0.1", 00:15:55.010 "trsvcid": "52848" 00:15:55.010 }, 00:15:55.010 "auth": { 00:15:55.010 "state": "completed", 00:15:55.010 "digest": "sha384", 00:15:55.010 "dhgroup": "ffdhe4096" 00:15:55.010 } 00:15:55.010 } 00:15:55.010 ]' 00:15:55.010 23:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:55.010 23:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:55.010 23:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:55.010 23:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:55.010 23:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:55.010 23:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.010 23:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.010 23:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.268 23:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:MjdlNzliMjY5NWVlMjJlNGEyZjc1N2Y3ODY5NGFkZWL42jLu: --dhchap-ctrl-secret DHHC-1:02:YTE4ZTA5Y2FmMDg1ODc3YzgwYTgxMjdlY2Q0ZjIxZmVjZGY3MTU3NzU2M2Y2MjI00G3iEg==: 00:15:56.198 23:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.198 23:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:56.198 23:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.198 23:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.198 23:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.198 23:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:56.198 23:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:56.198 23:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:56.455 23:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:15:56.455 23:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:56.455 23:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:56.455 23:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:56.455 23:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:56.455 23:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.455 23:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.455 23:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.455 23:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.455 23:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.455 23:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.455 23:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:15:56.712 00:15:56.712 23:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:56.712 23:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:56.712 23:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.967 23:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.967 23:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.967 23:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.967 23:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.967 23:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.967 23:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:56.967 { 00:15:56.967 "cntlid": 77, 00:15:56.967 "qid": 0, 00:15:56.967 "state": "enabled", 00:15:56.967 "thread": "nvmf_tgt_poll_group_000", 00:15:56.968 "listen_address": { 00:15:56.968 "trtype": "TCP", 00:15:56.968 "adrfam": "IPv4", 00:15:56.968 "traddr": "10.0.0.2", 00:15:56.968 "trsvcid": "4420" 00:15:56.968 }, 00:15:56.968 "peer_address": { 00:15:56.968 "trtype": "TCP", 00:15:56.968 "adrfam": "IPv4", 00:15:56.968 "traddr": "10.0.0.1", 00:15:56.968 "trsvcid": "52858" 00:15:56.968 }, 00:15:56.968 "auth": { 00:15:56.968 "state": "completed", 00:15:56.968 "digest": "sha384", 00:15:56.968 "dhgroup": "ffdhe4096" 00:15:56.968 } 00:15:56.968 } 00:15:56.968 ]' 00:15:56.968 23:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:56.968 23:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:56.968 23:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:56.968 23:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:56.968 23:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:57.223 23:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.223 23:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.223 23:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.480 23:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MGVkOWQ3NTk4ODk3OWM0Y2FkYjBjZmY0NTc5MGM0YmVkNWI4OGI5MDZhYWExODg4jJz3QQ==: --dhchap-ctrl-secret DHHC-1:01:YjE2NDI4YWY4YjkzNGEyMGQ3MmI0Mjk1MDFmODY1MWOvraqq: 00:15:58.411 23:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.411 23:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:58.411 23:42:33 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.411 23:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.411 23:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.411 23:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:58.411 23:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:58.411 23:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:58.668 23:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:15:58.668 23:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:58.668 23:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:58.668 23:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:58.668 23:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:58.668 23:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.668 23:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:58.668 23:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.668 23:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.668 23:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.668 23:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:58.668 23:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:58.925 00:15:58.925 23:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:58.925 23:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:58.925 23:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.182 23:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.182 23:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.182 23:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.182 23:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.182 23:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.182 23:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:59.182 { 00:15:59.182 "cntlid": 79, 00:15:59.182 "qid": 
0, 00:15:59.182 "state": "enabled", 00:15:59.182 "thread": "nvmf_tgt_poll_group_000", 00:15:59.182 "listen_address": { 00:15:59.182 "trtype": "TCP", 00:15:59.182 "adrfam": "IPv4", 00:15:59.182 "traddr": "10.0.0.2", 00:15:59.182 "trsvcid": "4420" 00:15:59.182 }, 00:15:59.182 "peer_address": { 00:15:59.182 "trtype": "TCP", 00:15:59.182 "adrfam": "IPv4", 00:15:59.182 "traddr": "10.0.0.1", 00:15:59.182 "trsvcid": "52884" 00:15:59.182 }, 00:15:59.182 "auth": { 00:15:59.182 "state": "completed", 00:15:59.182 "digest": "sha384", 00:15:59.182 "dhgroup": "ffdhe4096" 00:15:59.182 } 00:15:59.182 } 00:15:59.182 ]' 00:15:59.182 23:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:59.182 23:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:59.182 23:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:59.440 23:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:59.440 23:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:59.440 23:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.440 23:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.440 23:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.697 23:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MzMzNmIzZGYyYTMxY2JiNGQ5Njk2ODZiZGZjNDhiNGJhNGFlYjI4NmRkNDFlYjViMDhhZWZjMTQzYzcyOWMxMxeY0e4=: 00:16:00.628 23:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.628 23:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:00.628 23:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.628 23:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.628 23:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.628 23:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:00.628 23:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:00.628 23:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:00.628 23:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:00.628 23:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:16:00.628 23:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:00.628 23:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:00.628 23:42:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:00.628 23:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:00.628 23:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.628 23:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.629 23:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.629 23:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.629 23:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.629 23:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.629 23:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.194 00:16:01.194 23:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:01.194 23:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:01.194 23:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.451 23:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.451 23:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.451 23:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.451 23:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.451 23:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.451 23:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:01.451 { 00:16:01.451 "cntlid": 81, 00:16:01.451 "qid": 0, 00:16:01.451 "state": "enabled", 00:16:01.451 "thread": "nvmf_tgt_poll_group_000", 00:16:01.451 "listen_address": { 00:16:01.451 "trtype": "TCP", 00:16:01.451 "adrfam": "IPv4", 00:16:01.451 "traddr": "10.0.0.2", 00:16:01.451 "trsvcid": "4420" 00:16:01.451 }, 00:16:01.451 "peer_address": { 00:16:01.451 "trtype": "TCP", 00:16:01.451 "adrfam": "IPv4", 00:16:01.451 "traddr": "10.0.0.1", 00:16:01.451 "trsvcid": "57142" 00:16:01.451 }, 00:16:01.451 "auth": { 00:16:01.451 "state": "completed", 00:16:01.451 "digest": "sha384", 00:16:01.451 "dhgroup": "ffdhe6144" 00:16:01.451 } 00:16:01.451 } 00:16:01.451 ]' 00:16:01.451 23:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:01.451 23:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:01.451 23:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:01.709 23:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:01.709 23:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:01.709 23:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.709 23:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.709 23:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.967 23:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:NzJhODNmMTE3MDI2NTg1ODA2YjMwNjdjNjBkZmIxMTFhNGJlYzEzZjIyNzRiMGJmoI/A2g==: --dhchap-ctrl-secret DHHC-1:03:MTZkNzZhYzgwMWM5MGFkM2EzNGM3YjhjZTExMmVhNjRjMzU1MzQyODQwNjVmYzE3ODEwYzUyODMzZWIyMzMzZigqzgg=: 00:16:02.901 23:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.901 23:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:02.901 23:42:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.901 23:42:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.901 23:42:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.901 23:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:02.901 23:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:02.901 23:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:03.158 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:16:03.158 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:03.158 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:03.158 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:03.158 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:03.158 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.158 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.158 23:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.158 23:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.158 23:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.158 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.158 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.723 00:16:03.723 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:03.723 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:03.723 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.723 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.723 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.723 23:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.723 23:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.981 23:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.981 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:03.981 { 00:16:03.981 "cntlid": 83, 00:16:03.981 "qid": 0, 00:16:03.981 "state": "enabled", 00:16:03.981 "thread": "nvmf_tgt_poll_group_000", 00:16:03.981 "listen_address": { 00:16:03.981 "trtype": "TCP", 00:16:03.981 "adrfam": "IPv4", 00:16:03.981 "traddr": "10.0.0.2", 00:16:03.981 "trsvcid": "4420" 00:16:03.981 }, 00:16:03.981 "peer_address": { 00:16:03.981 "trtype": "TCP", 00:16:03.981 "adrfam": "IPv4", 00:16:03.981 "traddr": "10.0.0.1", 00:16:03.981 "trsvcid": "57162" 00:16:03.981 }, 00:16:03.981 "auth": { 00:16:03.981 "state": "completed", 00:16:03.981 "digest": "sha384", 00:16:03.981 "dhgroup": "ffdhe6144" 00:16:03.981 } 00:16:03.981 } 00:16:03.981 ]' 00:16:03.981 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:03.981 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:03.981 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:03.981 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:03.981 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:03.981 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.981 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.981 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.239 23:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:MjdlNzliMjY5NWVlMjJlNGEyZjc1N2Y3ODY5NGFkZWL42jLu: --dhchap-ctrl-secret 
DHHC-1:02:YTE4ZTA5Y2FmMDg1ODc3YzgwYTgxMjdlY2Q0ZjIxZmVjZGY3MTU3NzU2M2Y2MjI00G3iEg==: 00:16:05.178 23:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.178 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.178 23:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:05.178 23:42:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.178 23:42:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.178 23:42:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.178 23:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:05.178 23:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:05.178 23:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:05.441 23:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:16:05.441 23:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:05.441 23:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:05.441 23:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:05.441 23:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:05.441 23:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.441 23:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.441 23:42:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.441 23:42:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.441 23:42:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.441 23:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.441 23:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.005 00:16:06.005 23:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:06.005 23:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:06.005 23:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.270 23:42:41 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.270 23:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.270 23:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.270 23:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.270 23:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.270 23:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:06.270 { 00:16:06.270 "cntlid": 85, 00:16:06.270 "qid": 0, 00:16:06.270 "state": "enabled", 00:16:06.270 "thread": "nvmf_tgt_poll_group_000", 00:16:06.270 "listen_address": { 00:16:06.270 "trtype": "TCP", 00:16:06.270 "adrfam": "IPv4", 00:16:06.270 "traddr": "10.0.0.2", 00:16:06.270 "trsvcid": "4420" 00:16:06.270 }, 00:16:06.270 "peer_address": { 00:16:06.270 "trtype": "TCP", 00:16:06.270 "adrfam": "IPv4", 00:16:06.270 "traddr": "10.0.0.1", 00:16:06.270 "trsvcid": "57190" 00:16:06.270 }, 00:16:06.270 "auth": { 00:16:06.270 "state": "completed", 00:16:06.270 "digest": "sha384", 00:16:06.270 "dhgroup": "ffdhe6144" 00:16:06.270 } 00:16:06.270 } 00:16:06.270 ]' 00:16:06.270 23:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:06.270 23:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:06.270 23:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:06.270 23:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:06.270 23:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:06.270 23:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.270 23:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.270 23:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.548 23:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MGVkOWQ3NTk4ODk3OWM0Y2FkYjBjZmY0NTc5MGM0YmVkNWI4OGI5MDZhYWExODg4jJz3QQ==: --dhchap-ctrl-secret DHHC-1:01:YjE2NDI4YWY4YjkzNGEyMGQ3MmI0Mjk1MDFmODY1MWOvraqq: 00:16:07.524 23:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.524 23:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:07.524 23:42:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.524 23:42:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.524 23:42:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.524 23:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:07.524 23:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
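[annotation] Each pass of the loop above is one parameterized round trip: pick a digest, a DH group and a key index, configure both sides, authenticate, verify, tear down. Below is a minimal sketch of the setup half of one round, built only from commands that appear verbatim in the trace; the NQNs, sockets and key names are the ones this run uses, while the target-side RPC socket (rpc.py's default /var/tmp/spdk.sock) and the earlier registration of key0..key3 and ckey0..ckey2 are assumptions — those steps happened before this excerpt.

  #!/usr/bin/env bash
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a

  # Host side: pin the initiator to exactly one digest and one DH group, so
  # the DH-HMAC-CHAP negotiation can only succeed with the combination under test.
  $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

  # Target side (default socket assumed): authorize the host on the subsystem
  # and bind its DH-HMAC-CHAP key. key3 carries no controller key in this run,
  # hence no --dhchap-ctrlr-key here; the key0..key2 passes add
  # "--dhchap-ctrlr-key ckeyN" for bidirectional authentication.
  $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key3

  # Host side again: attaching the controller is what actually triggers the
  # authentication exchange over the new admin queue.
  $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key3
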
00:16:07.525 23:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:07.782 23:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:16:07.782 23:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:07.782 23:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:07.782 23:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:07.782 23:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:07.782 23:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.782 23:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:07.782 23:42:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.782 23:42:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.782 23:42:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.782 23:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:07.782 23:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:08.346 00:16:08.346 23:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:08.346 23:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:08.346 23:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.346 23:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.346 23:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.346 23:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.603 23:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.603 23:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.603 23:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:08.603 { 00:16:08.603 "cntlid": 87, 00:16:08.603 "qid": 0, 00:16:08.603 "state": "enabled", 00:16:08.603 "thread": "nvmf_tgt_poll_group_000", 00:16:08.603 "listen_address": { 00:16:08.603 "trtype": "TCP", 00:16:08.603 "adrfam": "IPv4", 00:16:08.603 "traddr": "10.0.0.2", 00:16:08.603 "trsvcid": "4420" 00:16:08.603 }, 00:16:08.603 "peer_address": { 00:16:08.603 "trtype": "TCP", 00:16:08.603 "adrfam": "IPv4", 00:16:08.603 "traddr": "10.0.0.1", 00:16:08.603 "trsvcid": "57214" 00:16:08.603 }, 00:16:08.603 "auth": { 00:16:08.603 "state": "completed", 
00:16:08.603 "digest": "sha384", 00:16:08.603 "dhgroup": "ffdhe6144" 00:16:08.603 } 00:16:08.603 } 00:16:08.603 ]' 00:16:08.603 23:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:08.603 23:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:08.603 23:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:08.603 23:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:08.603 23:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:08.603 23:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.603 23:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.603 23:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.861 23:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MzMzNmIzZGYyYTMxY2JiNGQ5Njk2ODZiZGZjNDhiNGJhNGFlYjI4NmRkNDFlYjViMDhhZWZjMTQzYzcyOWMxMxeY0e4=: 00:16:09.794 23:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.794 23:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:09.794 23:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.794 23:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.794 23:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.794 23:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:09.794 23:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:09.794 23:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:09.794 23:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:10.052 23:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:16:10.052 23:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:10.052 23:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:10.052 23:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:10.052 23:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:10.052 23:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.052 23:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:16:10.052 23:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.052 23:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.052 23:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.052 23:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.052 23:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.986 00:16:10.986 23:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:10.986 23:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:10.986 23:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.244 23:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.244 23:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.244 23:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.244 23:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.244 23:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.244 23:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:11.244 { 00:16:11.244 "cntlid": 89, 00:16:11.244 "qid": 0, 00:16:11.244 "state": "enabled", 00:16:11.244 "thread": "nvmf_tgt_poll_group_000", 00:16:11.244 "listen_address": { 00:16:11.244 "trtype": "TCP", 00:16:11.244 "adrfam": "IPv4", 00:16:11.244 "traddr": "10.0.0.2", 00:16:11.244 "trsvcid": "4420" 00:16:11.244 }, 00:16:11.244 "peer_address": { 00:16:11.244 "trtype": "TCP", 00:16:11.244 "adrfam": "IPv4", 00:16:11.244 "traddr": "10.0.0.1", 00:16:11.244 "trsvcid": "36352" 00:16:11.244 }, 00:16:11.244 "auth": { 00:16:11.244 "state": "completed", 00:16:11.244 "digest": "sha384", 00:16:11.244 "dhgroup": "ffdhe8192" 00:16:11.244 } 00:16:11.244 } 00:16:11.244 ]' 00:16:11.244 23:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:11.244 23:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:11.244 23:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:11.244 23:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:11.244 23:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:11.244 23:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.244 23:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.244 23:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.502 23:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:NzJhODNmMTE3MDI2NTg1ODA2YjMwNjdjNjBkZmIxMTFhNGJlYzEzZjIyNzRiMGJmoI/A2g==: --dhchap-ctrl-secret DHHC-1:03:MTZkNzZhYzgwMWM5MGFkM2EzNGM3YjhjZTExMmVhNjRjMzU1MzQyODQwNjVmYzE3ODEwYzUyODMzZWIyMzMzZigqzgg=: 00:16:12.436 23:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.436 23:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:12.436 23:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.436 23:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.436 23:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.436 23:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:12.436 23:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:12.436 23:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:12.693 23:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:16:12.694 23:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:12.694 23:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:12.694 23:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:12.694 23:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:12.694 23:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.694 23:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.694 23:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.694 23:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.694 23:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.694 23:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.694 23:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
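[annotation] The second half of each round checks that authentication actually completed before the loop advances: the host must report a controller named nvme0, and the target's single qpair must show the negotiated digest and DH group with auth state "completed". A sketch of that verification half, under the same variables as the earlier sketch and matching this pass (sha384 / ffdhe8192); every command and jq filter is taken verbatim from the trace, and the secret placeholders stand in for the DHHC-1:01:... / DHHC-1:02:... blobs printed above:

  # Host view: the attach must have produced a controller named nvme0.
  $RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0

  # Target view: inspect the qpair's negotiated auth parameters.
  QPAIRS=$($RPC nvmf_subsystem_get_qpairs "$SUBNQN")
  echo "$QPAIRS" | jq -r '.[0].auth.digest'    # expect: sha384
  echo "$QPAIRS" | jq -r '.[0].auth.dhgroup'   # expect: ffdhe8192
  echo "$QPAIRS" | jq -r '.[0].auth.state'     # expect: completed

  # Tear down the SPDK-initiator path...
  $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

  # ...then re-prove the same key pair through the kernel initiator, where the
  # secrets travel inline as DHHC-1 blobs rather than as keyring names.
  nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
      --hostid 29f67375-a902-e411-ace9-001e67bc3c9a \
      --dhchap-secret '<host secret>' --dhchap-ctrl-secret '<ctrl secret>'
  nvme disconnect -n "$SUBNQN"
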
00:16:13.626 00:16:13.627 23:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:13.627 23:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.627 23:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:13.627 23:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.627 23:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.627 23:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.627 23:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.627 23:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.627 23:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:13.627 { 00:16:13.627 "cntlid": 91, 00:16:13.627 "qid": 0, 00:16:13.627 "state": "enabled", 00:16:13.627 "thread": "nvmf_tgt_poll_group_000", 00:16:13.627 "listen_address": { 00:16:13.627 "trtype": "TCP", 00:16:13.627 "adrfam": "IPv4", 00:16:13.627 "traddr": "10.0.0.2", 00:16:13.627 "trsvcid": "4420" 00:16:13.627 }, 00:16:13.627 "peer_address": { 00:16:13.627 "trtype": "TCP", 00:16:13.627 "adrfam": "IPv4", 00:16:13.627 "traddr": "10.0.0.1", 00:16:13.627 "trsvcid": "36374" 00:16:13.627 }, 00:16:13.627 "auth": { 00:16:13.627 "state": "completed", 00:16:13.627 "digest": "sha384", 00:16:13.627 "dhgroup": "ffdhe8192" 00:16:13.627 } 00:16:13.627 } 00:16:13.627 ]' 00:16:13.627 23:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:13.885 23:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:13.885 23:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:13.885 23:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:13.885 23:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:13.885 23:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.885 23:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.885 23:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.141 23:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:MjdlNzliMjY5NWVlMjJlNGEyZjc1N2Y3ODY5NGFkZWL42jLu: --dhchap-ctrl-secret DHHC-1:02:YTE4ZTA5Y2FmMDg1ODc3YzgwYTgxMjdlY2Q0ZjIxZmVjZGY3MTU3NzU2M2Y2MjI00G3iEg==: 00:16:15.072 23:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.072 23:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:15.072 23:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:16:15.072 23:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.072 23:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.072 23:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:15.072 23:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:15.072 23:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:15.329 23:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:16:15.329 23:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:15.329 23:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:15.329 23:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:15.329 23:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:15.329 23:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.329 23:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.329 23:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.329 23:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.329 23:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.329 23:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.329 23:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.262 00:16:16.262 23:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:16.262 23:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:16.262 23:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.262 23:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.262 23:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.262 23:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.262 23:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.262 23:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.262 23:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:16.262 { 
00:16:16.262 "cntlid": 93, 00:16:16.262 "qid": 0, 00:16:16.262 "state": "enabled", 00:16:16.262 "thread": "nvmf_tgt_poll_group_000", 00:16:16.262 "listen_address": { 00:16:16.262 "trtype": "TCP", 00:16:16.262 "adrfam": "IPv4", 00:16:16.262 "traddr": "10.0.0.2", 00:16:16.262 "trsvcid": "4420" 00:16:16.262 }, 00:16:16.262 "peer_address": { 00:16:16.262 "trtype": "TCP", 00:16:16.262 "adrfam": "IPv4", 00:16:16.262 "traddr": "10.0.0.1", 00:16:16.262 "trsvcid": "36412" 00:16:16.262 }, 00:16:16.262 "auth": { 00:16:16.262 "state": "completed", 00:16:16.262 "digest": "sha384", 00:16:16.262 "dhgroup": "ffdhe8192" 00:16:16.262 } 00:16:16.262 } 00:16:16.262 ]' 00:16:16.262 23:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:16.262 23:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:16.262 23:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:16.262 23:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:16.262 23:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:16.518 23:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.518 23:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.518 23:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.775 23:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MGVkOWQ3NTk4ODk3OWM0Y2FkYjBjZmY0NTc5MGM0YmVkNWI4OGI5MDZhYWExODg4jJz3QQ==: --dhchap-ctrl-secret DHHC-1:01:YjE2NDI4YWY4YjkzNGEyMGQ3MmI0Mjk1MDFmODY1MWOvraqq: 00:16:17.705 23:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.705 23:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:17.705 23:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.705 23:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.705 23:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.705 23:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:17.705 23:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:17.705 23:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:17.705 23:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:16:17.705 23:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:17.705 23:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:17.705 23:42:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:17.705 23:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:17.705 23:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.705 23:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:17.705 23:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.705 23:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.705 23:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.705 23:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:17.705 23:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:18.639 00:16:18.639 23:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:18.639 23:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:18.639 23:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.894 23:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.894 23:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.895 23:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.895 23:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.895 23:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.895 23:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:18.895 { 00:16:18.895 "cntlid": 95, 00:16:18.895 "qid": 0, 00:16:18.895 "state": "enabled", 00:16:18.895 "thread": "nvmf_tgt_poll_group_000", 00:16:18.895 "listen_address": { 00:16:18.895 "trtype": "TCP", 00:16:18.895 "adrfam": "IPv4", 00:16:18.895 "traddr": "10.0.0.2", 00:16:18.895 "trsvcid": "4420" 00:16:18.895 }, 00:16:18.895 "peer_address": { 00:16:18.895 "trtype": "TCP", 00:16:18.895 "adrfam": "IPv4", 00:16:18.895 "traddr": "10.0.0.1", 00:16:18.895 "trsvcid": "36438" 00:16:18.895 }, 00:16:18.895 "auth": { 00:16:18.895 "state": "completed", 00:16:18.895 "digest": "sha384", 00:16:18.895 "dhgroup": "ffdhe8192" 00:16:18.895 } 00:16:18.895 } 00:16:18.895 ]' 00:16:18.895 23:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:18.895 23:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:18.895 23:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:19.151 23:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:19.151 23:42:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:19.151 23:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.152 23:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.152 23:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.408 23:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MzMzNmIzZGYyYTMxY2JiNGQ5Njk2ODZiZGZjNDhiNGJhNGFlYjI4NmRkNDFlYjViMDhhZWZjMTQzYzcyOWMxMxeY0e4=: 00:16:20.340 23:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.340 23:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:20.340 23:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.340 23:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.340 23:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.340 23:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:20.340 23:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:20.340 23:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:20.340 23:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:20.340 23:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:20.340 23:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:16:20.340 23:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:20.340 23:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:20.340 23:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:20.340 23:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:20.340 23:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.340 23:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.340 23:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.340 23:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.340 23:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.340 23:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.340 23:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.905 00:16:20.905 23:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:20.905 23:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:20.905 23:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.905 23:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.905 23:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.905 23:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.905 23:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.905 23:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.905 23:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:20.905 { 00:16:20.905 "cntlid": 97, 00:16:20.905 "qid": 0, 00:16:20.905 "state": "enabled", 00:16:20.905 "thread": "nvmf_tgt_poll_group_000", 00:16:20.905 "listen_address": { 00:16:20.905 "trtype": "TCP", 00:16:20.905 "adrfam": "IPv4", 00:16:20.905 "traddr": "10.0.0.2", 00:16:20.905 "trsvcid": "4420" 00:16:20.905 }, 00:16:20.905 "peer_address": { 00:16:20.905 "trtype": "TCP", 00:16:20.905 "adrfam": "IPv4", 00:16:20.905 "traddr": "10.0.0.1", 00:16:20.905 "trsvcid": "46886" 00:16:20.905 }, 00:16:20.905 "auth": { 00:16:20.905 "state": "completed", 00:16:20.905 "digest": "sha512", 00:16:20.905 "dhgroup": "null" 00:16:20.905 } 00:16:20.905 } 00:16:20.905 ]' 00:16:20.905 23:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:21.162 23:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:21.162 23:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:21.162 23:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:21.162 23:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:21.162 23:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.162 23:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.162 23:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.419 23:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:NzJhODNmMTE3MDI2NTg1ODA2YjMwNjdjNjBkZmIxMTFhNGJlYzEzZjIyNzRiMGJmoI/A2g==: --dhchap-ctrl-secret 
DHHC-1:03:MTZkNzZhYzgwMWM5MGFkM2EzNGM3YjhjZTExMmVhNjRjMzU1MzQyODQwNjVmYzE3ODEwYzUyODMzZWIyMzMzZigqzgg=:
00:16:22.353 23:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:22.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:22.353 23:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:22.353 23:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:22.353 23:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:22.353 23:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:22.353 23:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:22.353 23:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:22.353 23:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:22.611 23:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1
00:16:22.611 23:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:22.611 23:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:22.611 23:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:16:22.611 23:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:16:22.611 23:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:22.611 23:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:22.611 23:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:22.611 23:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:22.611 23:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:22.611 23:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:22.611 23:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:22.869
00:16:22.869 23:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:22.869 23:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:22.869 23:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:23.127 23:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:23.127 23:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:23.128 23:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:23.128 23:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:23.128 23:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:23.128 23:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:23.128 {
00:16:23.128 "cntlid": 99,
00:16:23.128 "qid": 0,
00:16:23.128 "state": "enabled",
00:16:23.128 "thread": "nvmf_tgt_poll_group_000",
00:16:23.128 "listen_address": {
00:16:23.128 "trtype": "TCP",
00:16:23.128 "adrfam": "IPv4",
00:16:23.128 "traddr": "10.0.0.2",
00:16:23.128 "trsvcid": "4420"
00:16:23.128 },
00:16:23.128 "peer_address": {
00:16:23.128 "trtype": "TCP",
00:16:23.128 "adrfam": "IPv4",
00:16:23.128 "traddr": "10.0.0.1",
00:16:23.128 "trsvcid": "46924"
00:16:23.128 },
00:16:23.128 "auth": {
00:16:23.128 "state": "completed",
00:16:23.128 "digest": "sha512",
00:16:23.128 "dhgroup": "null"
00:16:23.128 }
00:16:23.128 }
00:16:23.128 ]'
00:16:23.128 23:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:23.128 23:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:23.128 23:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:23.128 23:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:16:23.128 23:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:23.128 23:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:23.128 23:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:23.128 23:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:23.386 23:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:MjdlNzliMjY5NWVlMjJlNGEyZjc1N2Y3ODY5NGFkZWL42jLu: --dhchap-ctrl-secret DHHC-1:02:YTE4ZTA5Y2FmMDg1ODc3YzgwYTgxMjdlY2Q0ZjIxZmVjZGY3MTU3NzU2M2Y2MjI00G3iEg==:
00:16:24.318 23:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:24.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:24.318 23:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:24.318 23:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:24.318 23:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:24.318 23:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:24.318 23:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:24.318 23:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:24.318 23:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:24.576 23:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2
00:16:24.576 23:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:24.576 23:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:24.576 23:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:16:24.576 23:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:16:24.576 23:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:24.576 23:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:24.576 23:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:24.576 23:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:24.576 23:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:24.576 23:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:24.576 23:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:25.141
00:16:25.141 23:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:25.141 23:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:25.141 23:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:25.141 23:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:25.141 23:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:25.141 23:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:25.141 23:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:25.141 23:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:25.141 23:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:25.141 {
00:16:25.141 "cntlid": 101,
00:16:25.141 "qid": 0,
00:16:25.141 "state": "enabled",
00:16:25.141 "thread": "nvmf_tgt_poll_group_000",
00:16:25.141 "listen_address": {
00:16:25.141 "trtype": "TCP",
00:16:25.141 "adrfam": "IPv4",
00:16:25.141 "traddr": "10.0.0.2",
00:16:25.141 "trsvcid": "4420"
00:16:25.141 },
00:16:25.141 "peer_address": {
00:16:25.141 "trtype": "TCP",
00:16:25.141 "adrfam": "IPv4",
00:16:25.141 "traddr": "10.0.0.1",
00:16:25.141 "trsvcid": "46966"
00:16:25.141 },
00:16:25.141 "auth": {
00:16:25.141 "state": "completed",
00:16:25.141 "digest": "sha512",
00:16:25.141 "dhgroup": "null"
00:16:25.141 }
00:16:25.141 }
00:16:25.141 ]'
00:16:25.398 23:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:25.398 23:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:25.399 23:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:25.399 23:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:16:25.399 23:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:25.399 23:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:25.399 23:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:25.399 23:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:25.656 23:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MGVkOWQ3NTk4ODk3OWM0Y2FkYjBjZmY0NTc5MGM0YmVkNWI4OGI5MDZhYWExODg4jJz3QQ==: --dhchap-ctrl-secret DHHC-1:01:YjE2NDI4YWY4YjkzNGEyMGQ3MmI0Mjk1MDFmODY1MWOvraqq:
00:16:26.589 23:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:26.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:26.589 23:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:26.589 23:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:26.589 23:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:26.589 23:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:26.589 23:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:26.589 23:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:26.589 23:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:26.847 23:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3
00:16:26.847 23:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:26.847 23:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:26.847 23:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:16:26.847 23:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:16:26.847 23:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:26.847 23:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3
00:16:26.847 23:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:26.847 23:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:26.847 23:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:26.847 23:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:26.847 23:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:27.104
00:16:27.104 23:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:27.104 23:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:27.104 23:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:27.362 23:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:27.362 23:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:27.362 23:43:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:27.362 23:43:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:27.362 23:43:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:27.362 23:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:27.362 {
00:16:27.362 "cntlid": 103,
00:16:27.362 "qid": 0,
00:16:27.362 "state": "enabled",
00:16:27.362 "thread": "nvmf_tgt_poll_group_000",
00:16:27.362 "listen_address": {
00:16:27.362 "trtype": "TCP",
00:16:27.362 "adrfam": "IPv4",
00:16:27.362 "traddr": "10.0.0.2",
00:16:27.362 "trsvcid": "4420"
00:16:27.362 },
00:16:27.362 "peer_address": {
00:16:27.362 "trtype": "TCP",
00:16:27.362 "adrfam": "IPv4",
00:16:27.362 "traddr": "10.0.0.1",
00:16:27.362 "trsvcid": "47010"
00:16:27.362 },
00:16:27.362 "auth": {
00:16:27.362 "state": "completed",
00:16:27.362 "digest": "sha512",
00:16:27.362 "dhgroup": "null"
00:16:27.362 }
00:16:27.362 }
00:16:27.362 ]'
00:16:27.362 23:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:27.362 23:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:27.362 23:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:27.619 23:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:16:27.619 23:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:27.619 23:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:27.619 23:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:27.619 23:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:27.875 23:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MzMzNmIzZGYyYTMxY2JiNGQ5Njk2ODZiZGZjNDhiNGJhNGFlYjI4NmRkNDFlYjViMDhhZWZjMTQzYzcyOWMxMxeY0e4=:
00:16:28.806 23:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:28.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:28.806 23:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:28.806 23:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:28.806 23:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:28.806 23:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:28.806 23:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:16:28.806 23:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:28.806 23:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:28.806 23:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:28.806 23:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0
00:16:28.806 23:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:28.806 23:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:28.806 23:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:16:28.806 23:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:16:28.806 23:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:28.806 23:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:28.806 23:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:28.806 23:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:28.806 23:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:28.806 23:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:28.806 23:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:29.370
00:16:29.370 23:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:29.370 23:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:29.370 23:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:29.627 23:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:29.627 23:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:29.627 23:43:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:29.627 23:43:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:29.627 23:43:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:29.627 23:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:29.627 {
00:16:29.627 "cntlid": 105,
00:16:29.627 "qid": 0,
00:16:29.627 "state": "enabled",
00:16:29.627 "thread": "nvmf_tgt_poll_group_000",
00:16:29.627 "listen_address": {
00:16:29.627 "trtype": "TCP",
00:16:29.627 "adrfam": "IPv4",
00:16:29.627 "traddr": "10.0.0.2",
00:16:29.627 "trsvcid": "4420"
00:16:29.627 },
00:16:29.627 "peer_address": {
00:16:29.627 "trtype": "TCP",
00:16:29.627 "adrfam": "IPv4",
00:16:29.627 "traddr": "10.0.0.1",
00:16:29.627 "trsvcid": "47030"
00:16:29.627 },
00:16:29.627 "auth": {
00:16:29.627 "state": "completed",
00:16:29.627 "digest": "sha512",
00:16:29.627 "dhgroup": "ffdhe2048"
00:16:29.627 }
00:16:29.627 }
00:16:29.627 ]'
00:16:29.627 23:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:29.627 23:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:29.627 23:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:29.627 23:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:29.627 23:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:29.627 23:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:29.627 23:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:29.627 23:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:29.884 23:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:NzJhODNmMTE3MDI2NTg1ODA2YjMwNjdjNjBkZmIxMTFhNGJlYzEzZjIyNzRiMGJmoI/A2g==: --dhchap-ctrl-secret DHHC-1:03:MTZkNzZhYzgwMWM5MGFkM2EzNGM3YjhjZTExMmVhNjRjMzU1MzQyODQwNjVmYzE3ODEwYzUyODMzZWIyMzMzZigqzgg=:
00:16:30.815 23:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:30.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:30.815 23:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:30.815 23:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:30.815 23:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:30.815 23:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:30.815 23:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:30.815 23:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:30.815 23:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:31.072 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1
00:16:31.072 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:31.072 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:31.072 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:16:31.072 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:16:31.072 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:31.072 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:31.072 23:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:31.072 23:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:31.072 23:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:31.072 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:31.072 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:31.636
00:16:31.636 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:31.636 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:31.636 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:31.893 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:31.893 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:31.893 23:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:31.893 23:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:31.893 23:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:31.893 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:31.893 {
00:16:31.893 "cntlid": 107,
00:16:31.893 "qid": 0,
00:16:31.893 "state": "enabled",
00:16:31.893 "thread": "nvmf_tgt_poll_group_000",
00:16:31.893 "listen_address": {
00:16:31.893 "trtype": "TCP",
00:16:31.893 "adrfam": "IPv4",
00:16:31.893 "traddr": "10.0.0.2",
00:16:31.893 "trsvcid": "4420"
00:16:31.893 },
00:16:31.893 "peer_address": {
00:16:31.893 "trtype": "TCP",
00:16:31.893 "adrfam": "IPv4",
00:16:31.893 "traddr": "10.0.0.1",
00:16:31.893 "trsvcid": "50810"
00:16:31.893 },
00:16:31.893 "auth": {
00:16:31.893 "state": "completed",
00:16:31.893 "digest": "sha512",
00:16:31.893 "dhgroup": "ffdhe2048"
00:16:31.893 }
00:16:31.893 }
00:16:31.893 ]'
00:16:31.893 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:31.893 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:31.893 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:31.893 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:31.893 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:31.893 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:31.893 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:31.893 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:32.151 23:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:MjdlNzliMjY5NWVlMjJlNGEyZjc1N2Y3ODY5NGFkZWL42jLu: --dhchap-ctrl-secret DHHC-1:02:YTE4ZTA5Y2FmMDg1ODc3YzgwYTgxMjdlY2Q0ZjIxZmVjZGY3MTU3NzU2M2Y2MjI00G3iEg==:
00:16:33.082 23:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:33.082 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:33.082 23:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:33.082 23:43:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:33.082 23:43:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:33.082 23:43:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:33.082 23:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:33.082 23:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:33.082 23:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:33.340 23:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2
00:16:33.340 23:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:33.340 23:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:33.340 23:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:16:33.340 23:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:16:33.340 23:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:33.340 23:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:33.340 23:43:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:33.340 23:43:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:33.340 23:43:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:33.340 23:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:33.340 23:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:33.597
00:16:33.871 23:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:33.871 23:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:33.871 23:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:33.871 23:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:33.871 23:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:33.871 23:43:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:33.871 23:43:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:34.162 23:43:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:34.162 23:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:34.162 {
00:16:34.162 "cntlid": 109,
00:16:34.162 "qid": 0,
00:16:34.162 "state": "enabled",
00:16:34.162 "thread": "nvmf_tgt_poll_group_000",
00:16:34.162 "listen_address": {
00:16:34.162 "trtype": "TCP",
00:16:34.162 "adrfam": "IPv4",
00:16:34.162 "traddr": "10.0.0.2",
00:16:34.162 "trsvcid": "4420"
00:16:34.162 },
00:16:34.162 "peer_address": {
00:16:34.162 "trtype": "TCP",
00:16:34.162 "adrfam": "IPv4",
00:16:34.162 "traddr": "10.0.0.1",
00:16:34.162 "trsvcid": "50840"
00:16:34.162 },
00:16:34.162 "auth": {
00:16:34.162 "state": "completed",
00:16:34.162 "digest": "sha512",
00:16:34.162 "dhgroup": "ffdhe2048"
00:16:34.162 }
00:16:34.162 }
00:16:34.162 ]'
00:16:34.162 23:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:34.162 23:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:34.162 23:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:34.162 23:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:34.162 23:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:34.163 23:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:34.163 23:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:34.163 23:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:34.425 23:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MGVkOWQ3NTk4ODk3OWM0Y2FkYjBjZmY0NTc5MGM0YmVkNWI4OGI5MDZhYWExODg4jJz3QQ==: --dhchap-ctrl-secret DHHC-1:01:YjE2NDI4YWY4YjkzNGEyMGQ3MmI0Mjk1MDFmODY1MWOvraqq:
00:16:35.357 23:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:35.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:35.357 23:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:35.357 23:43:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:35.357 23:43:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:35.357 23:43:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:35.357 23:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:35.357 23:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:35.357 23:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:35.613 23:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3
00:16:35.613 23:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:35.613 23:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:35.613 23:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:16:35.613 23:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:16:35.613 23:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:35.613 23:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3
00:16:35.613 23:43:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:35.613 23:43:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:35.613 23:43:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:35.613 23:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:35.613 23:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:35.870
00:16:35.870 23:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:35.870 23:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:35.870 23:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:36.126 23:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:36.126 23:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:36.126 23:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:36.126 23:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:36.126 23:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:36.126 23:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:36.126 {
00:16:36.126 "cntlid": 111,
00:16:36.126 "qid": 0,
00:16:36.126 "state": "enabled",
00:16:36.126 "thread": "nvmf_tgt_poll_group_000",
00:16:36.126 "listen_address": {
00:16:36.126 "trtype": "TCP",
00:16:36.126 "adrfam": "IPv4",
00:16:36.126 "traddr": "10.0.0.2",
00:16:36.126 "trsvcid": "4420"
00:16:36.126 },
00:16:36.126 "peer_address": {
00:16:36.126 "trtype": "TCP",
00:16:36.126 "adrfam": "IPv4",
00:16:36.126 "traddr": "10.0.0.1",
00:16:36.126 "trsvcid": "50876"
00:16:36.126 },
00:16:36.126 "auth": {
00:16:36.126 "state": "completed",
00:16:36.126 "digest": "sha512",
00:16:36.126 "dhgroup": "ffdhe3072"
00:16:36.126 }
00:16:36.126 }
00:16:36.126 ]'
00:16:36.126 23:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:36.126 23:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:36.126 23:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:36.382 23:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:36.382 23:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:36.382 23:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:36.382 23:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:36.382 23:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:36.639 23:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MzMzNmIzZGYyYTMxY2JiNGQ5Njk2ODZiZGZjNDhiNGJhNGFlYjI4NmRkNDFlYjViMDhhZWZjMTQzYzcyOWMxMxeY0e4=:
00:16:37.570 23:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:37.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:37.570 23:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:37.570 23:43:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:37.570 23:43:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:37.570 23:43:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:37.570 23:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:16:37.570 23:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:37.570 23:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:37.570 23:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:37.827 23:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0
00:16:37.827 23:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:37.827 23:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:37.827 23:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:16:37.827 23:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:16:37.827 23:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:37.827 23:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:37.827 23:43:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:37.827 23:43:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:37.827 23:43:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:37.827 23:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:37.827 23:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:38.084
00:16:38.084 23:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:38.084 23:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:38.084 23:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:38.340 23:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:38.340 23:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:38.340 23:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:38.340 23:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:38.340 23:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:38.340 23:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:38.340 {
00:16:38.340 "cntlid": 113,
00:16:38.340 "qid": 0,
00:16:38.340 "state": "enabled",
00:16:38.340 "thread": "nvmf_tgt_poll_group_000",
00:16:38.340 "listen_address": {
00:16:38.340 "trtype": "TCP",
00:16:38.340 "adrfam": "IPv4",
00:16:38.340 "traddr": "10.0.0.2",
00:16:38.340 "trsvcid": "4420"
00:16:38.340 },
00:16:38.340 "peer_address": {
00:16:38.340 "trtype": "TCP",
00:16:38.340 "adrfam": "IPv4",
00:16:38.340 "traddr": "10.0.0.1",
00:16:38.340 "trsvcid": "50910"
00:16:38.340 },
00:16:38.340 "auth": {
00:16:38.340 "state": "completed",
00:16:38.340 "digest": "sha512",
00:16:38.340 "dhgroup": "ffdhe3072"
00:16:38.340 }
00:16:38.340 }
00:16:38.340 ]'
00:16:38.340 23:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:38.340 23:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:38.340 23:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:38.340 23:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:38.340 23:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:38.340 23:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:38.340 23:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:38.340 23:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:38.596 23:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:NzJhODNmMTE3MDI2NTg1ODA2YjMwNjdjNjBkZmIxMTFhNGJlYzEzZjIyNzRiMGJmoI/A2g==: --dhchap-ctrl-secret DHHC-1:03:MTZkNzZhYzgwMWM5MGFkM2EzNGM3YjhjZTExMmVhNjRjMzU1MzQyODQwNjVmYzE3ODEwYzUyODMzZWIyMzMzZigqzgg=:
00:16:39.527 23:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:39.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:39.527 23:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:39.527 23:43:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:39.527 23:43:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:39.527 23:43:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:39.527 23:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:39.527 23:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:39.527 23:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:39.784 23:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1
00:16:39.784 23:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:39.784 23:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:39.784 23:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:16:39.784 23:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:16:39.784 23:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:39.784 23:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:39.784 23:43:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:39.784 23:43:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:39.784 23:43:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:39.784 23:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:39.784 23:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:40.041
00:16:40.041 23:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:40.041 23:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:40.041 23:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:40.298 23:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:40.298 23:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:40.298 23:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:40.298 23:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:40.298 23:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:40.298 23:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:40.298 {
00:16:40.298 "cntlid": 115,
00:16:40.298 "qid": 0,
00:16:40.298 "state": "enabled",
00:16:40.298 "thread": "nvmf_tgt_poll_group_000",
00:16:40.298 "listen_address": {
00:16:40.298 "trtype": "TCP",
00:16:40.298 "adrfam": "IPv4",
00:16:40.298 "traddr": "10.0.0.2",
00:16:40.298 "trsvcid": "4420"
00:16:40.298 },
00:16:40.298 "peer_address": {
00:16:40.298 "trtype": "TCP",
00:16:40.298 "adrfam": "IPv4",
00:16:40.298 "traddr": "10.0.0.1",
00:16:40.298 "trsvcid": "50942"
00:16:40.298 },
00:16:40.298 "auth": {
00:16:40.298 "state": "completed",
00:16:40.298 "digest": "sha512",
00:16:40.298 "dhgroup": "ffdhe3072"
00:16:40.298 }
00:16:40.298 }
00:16:40.298 ]'
00:16:40.556 23:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:40.556 23:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:40.556 23:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:40.556 23:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:40.556 23:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:40.556 23:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:40.556 23:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:40.556 23:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:40.814 23:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:MjdlNzliMjY5NWVlMjJlNGEyZjc1N2Y3ODY5NGFkZWL42jLu: --dhchap-ctrl-secret DHHC-1:02:YTE4ZTA5Y2FmMDg1ODc3YzgwYTgxMjdlY2Q0ZjIxZmVjZGY3MTU3NzU2M2Y2MjI00G3iEg==:
00:16:41.747 23:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:41.747 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:41.747 23:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:41.747 23:43:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:41.747 23:43:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:41.747 23:43:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:41.747 23:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:41.747 23:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:41.747 23:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:42.005 23:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2
00:16:42.005 23:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:42.005 23:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:42.005 23:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:16:42.005 23:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:16:42.005 23:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:42.005 23:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:42.005 23:43:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:42.005 23:43:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:42.005 23:43:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:42.005 23:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:42.005 23:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:42.263
00:16:42.263 23:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:42.263 23:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:42.263 23:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:42.521 23:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:42.521 23:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:42.521 23:43:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:42.521 23:43:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:42.521 23:43:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:42.521 23:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:42.521 {
00:16:42.521 "cntlid": 117,
00:16:42.521 "qid": 0,
00:16:42.521 "state": "enabled",
00:16:42.521 "thread": "nvmf_tgt_poll_group_000",
00:16:42.521 "listen_address": {
00:16:42.521 "trtype": "TCP",
00:16:42.521 "adrfam": "IPv4",
00:16:42.521 "traddr": "10.0.0.2",
00:16:42.521 "trsvcid": "4420"
00:16:42.521 },
00:16:42.521 "peer_address": {
00:16:42.521 "trtype": "TCP",
00:16:42.521 "adrfam": "IPv4",
00:16:42.521 "traddr": "10.0.0.1",
00:16:42.521 "trsvcid": "59076"
00:16:42.521 },
00:16:42.521 "auth": {
00:16:42.521 "state": "completed",
00:16:42.521 "digest": "sha512",
00:16:42.521 "dhgroup": "ffdhe3072"
00:16:42.521 }
00:16:42.521 }
00:16:42.521 ]'
00:16:42.521 23:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:42.521 23:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:42.521 23:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:42.521 23:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:42.779 23:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:42.779 23:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:42.779 23:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:42.779 23:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:43.037 23:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MGVkOWQ3NTk4ODk3OWM0Y2FkYjBjZmY0NTc5MGM0YmVkNWI4OGI5MDZhYWExODg4jJz3QQ==: --dhchap-ctrl-secret DHHC-1:01:YjE2NDI4YWY4YjkzNGEyMGQ3MmI0Mjk1MDFmODY1MWOvraqq:
00:16:43.969 23:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:43.969 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:43.969 23:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:43.969 23:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:43.969 23:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:43.969 23:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:43.969 23:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:43.969 23:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:43.969 23:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:44.227 23:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3
00:16:44.227 23:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:44.227 23:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:44.227 23:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:16:44.227 23:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:16:44.227 23:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:44.227 23:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3
00:16:44.227 23:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:44.227 23:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:44.227 23:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:44.227 23:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:44.227 23:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:44.485
00:16:44.485 23:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:44.485 23:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:44.485 23:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:44.743 23:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:44.743 23:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:44.743 23:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:44.743 23:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:44.743 23:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:44.743 23:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:44.743 {
00:16:44.743 "cntlid": 119,
00:16:44.743 "qid": 0,
00:16:44.743 "state": "enabled",
00:16:44.743 "thread": "nvmf_tgt_poll_group_000",
00:16:44.743 "listen_address": {
00:16:44.743 "trtype": "TCP",
00:16:44.743 "adrfam": "IPv4",
00:16:44.743 "traddr": "10.0.0.2",
00:16:44.743 "trsvcid": "4420"
00:16:44.743 },
00:16:44.743 "peer_address": {
00:16:44.743 "trtype": "TCP",
00:16:44.743 "adrfam": "IPv4",
00:16:44.743 "traddr": "10.0.0.1",
00:16:44.743 "trsvcid": "59102"
00:16:44.743 },
00:16:44.743 "auth": {
00:16:44.743 "state": "completed",
00:16:44.743 "digest": "sha512",
00:16:44.743 "dhgroup": "ffdhe3072"
00:16:44.743 }
00:16:44.743 }
00:16:44.743 ]'
00:16:44.743 23:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:44.743 23:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:44.743 23:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:44.743 23:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:45.002 23:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:45.002 23:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:45.002 23:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:45.002 23:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:45.264 23:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MzMzNmIzZGYyYTMxY2JiNGQ5Njk2ODZiZGZjNDhiNGJhNGFlYjI4NmRkNDFlYjViMDhhZWZjMTQzYzcyOWMxMxeY0e4=:
00:16:46.196 23:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:46.196 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:46.196 23:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:46.196 23:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:46.196 23:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:46.196 23:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:46.196 23:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:16:46.196 23:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:46.196 23:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:46.196 23:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:46.454 23:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0
00:16:46.454 23:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:46.454 23:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:46.454 23:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:16:46.454 23:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:16:46.454 23:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:46.454 23:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:46.454 23:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:46.454 23:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:46.454 23:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:46.454 23:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:46.454 23:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:46.711
00:16:46.711 23:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:46.711 23:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:46.711 23:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:46.968 23:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:46.968 23:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:46.968 23:43:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:46.968 23:43:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:46.968 23:43:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:46.968 23:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:46.968 {
00:16:46.968 "cntlid": 121,
00:16:46.968 "qid": 0,
00:16:46.968 "state": "enabled",
00:16:46.968 "thread": "nvmf_tgt_poll_group_000",
00:16:46.968 "listen_address": {
00:16:46.968 "trtype": "TCP",
00:16:46.968 "adrfam": "IPv4",
00:16:46.968 "traddr": "10.0.0.2",
00:16:46.968 "trsvcid": "4420"
00:16:46.968 },
00:16:46.968 "peer_address": {
00:16:46.968 "trtype": "TCP",
00:16:46.968 "adrfam": "IPv4",
00:16:46.968 "traddr": "10.0.0.1",
00:16:46.968 "trsvcid": "59120"
00:16:46.968 },
00:16:46.968 "auth": {
00:16:46.968 "state": "completed",
00:16:46.968 "digest": "sha512",
00:16:46.968 "dhgroup": "ffdhe4096"
00:16:46.968 }
00:16:46.968 }
00:16:46.968 ]'
00:16:47.225 23:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:47.225 23:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:47.225 23:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:47.225 23:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:47.225 23:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:47.225 23:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:47.225 23:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:47.225 23:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:47.539 23:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:NzJhODNmMTE3MDI2NTg1ODA2YjMwNjdjNjBkZmIxMTFhNGJlYzEzZjIyNzRiMGJmoI/A2g==: --dhchap-ctrl-secret DHHC-1:03:MTZkNzZhYzgwMWM5MGFkM2EzNGM3YjhjZTExMmVhNjRjMzU1MzQyODQwNjVmYzE3ODEwYzUyODMzZWIyMzMzZigqzgg=:
00:16:48.499 23:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:48.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:48.499 23:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:48.499 23:43:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:48.499 23:43:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:48.499 23:43:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:48.499 23:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:48.499 23:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:48.499 23:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:48.756 23:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1
00:16:48.756 23:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:48.756 23:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:48.756 23:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:16:48.756 23:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:16:48.756 23:43:
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.756 23:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.756 23:43:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.756 23:43:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.756 23:43:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.756 23:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.756 23:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.012 00:16:49.012 23:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:49.012 23:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:49.012 23:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.269 23:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.269 23:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.269 23:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.269 23:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.269 23:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.269 23:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:49.269 { 00:16:49.269 "cntlid": 123, 00:16:49.269 "qid": 0, 00:16:49.269 "state": "enabled", 00:16:49.269 "thread": "nvmf_tgt_poll_group_000", 00:16:49.269 "listen_address": { 00:16:49.269 "trtype": "TCP", 00:16:49.269 "adrfam": "IPv4", 00:16:49.269 "traddr": "10.0.0.2", 00:16:49.269 "trsvcid": "4420" 00:16:49.269 }, 00:16:49.269 "peer_address": { 00:16:49.269 "trtype": "TCP", 00:16:49.269 "adrfam": "IPv4", 00:16:49.269 "traddr": "10.0.0.1", 00:16:49.269 "trsvcid": "59138" 00:16:49.269 }, 00:16:49.269 "auth": { 00:16:49.269 "state": "completed", 00:16:49.269 "digest": "sha512", 00:16:49.269 "dhgroup": "ffdhe4096" 00:16:49.269 } 00:16:49.269 } 00:16:49.269 ]' 00:16:49.269 23:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:49.269 23:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:49.269 23:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:49.269 23:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:49.269 23:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:49.526 23:43:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.526 23:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.526 23:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.783 23:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:MjdlNzliMjY5NWVlMjJlNGEyZjc1N2Y3ODY5NGFkZWL42jLu: --dhchap-ctrl-secret DHHC-1:02:YTE4ZTA5Y2FmMDg1ODc3YzgwYTgxMjdlY2Q0ZjIxZmVjZGY3MTU3NzU2M2Y2MjI00G3iEg==: 00:16:50.716 23:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.716 23:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:50.716 23:43:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.716 23:43:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.716 23:43:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.716 23:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:50.716 23:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:50.716 23:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:50.716 23:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:16:50.716 23:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:50.716 23:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:50.716 23:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:50.716 23:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:50.716 23:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.716 23:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.716 23:43:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.716 23:43:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.716 23:43:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.716 23:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.716 23:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.282 00:16:51.282 23:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:51.282 23:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.282 23:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:51.282 23:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.282 23:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.282 23:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.282 23:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.539 23:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.539 23:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:51.539 { 00:16:51.539 "cntlid": 125, 00:16:51.539 "qid": 0, 00:16:51.539 "state": "enabled", 00:16:51.539 "thread": "nvmf_tgt_poll_group_000", 00:16:51.539 "listen_address": { 00:16:51.539 "trtype": "TCP", 00:16:51.539 "adrfam": "IPv4", 00:16:51.539 "traddr": "10.0.0.2", 00:16:51.539 "trsvcid": "4420" 00:16:51.539 }, 00:16:51.539 "peer_address": { 00:16:51.539 "trtype": "TCP", 00:16:51.539 "adrfam": "IPv4", 00:16:51.539 "traddr": "10.0.0.1", 00:16:51.539 "trsvcid": "51506" 00:16:51.539 }, 00:16:51.539 "auth": { 00:16:51.539 "state": "completed", 00:16:51.539 "digest": "sha512", 00:16:51.539 "dhgroup": "ffdhe4096" 00:16:51.539 } 00:16:51.539 } 00:16:51.539 ]' 00:16:51.539 23:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:51.539 23:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:51.539 23:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:51.539 23:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:51.539 23:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:51.539 23:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.539 23:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.539 23:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.796 23:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MGVkOWQ3NTk4ODk3OWM0Y2FkYjBjZmY0NTc5MGM0YmVkNWI4OGI5MDZhYWExODg4jJz3QQ==: --dhchap-ctrl-secret DHHC-1:01:YjE2NDI4YWY4YjkzNGEyMGQ3MmI0Mjk1MDFmODY1MWOvraqq: 00:16:52.728 23:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
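Every iteration in this loop is the same DH-HCHAP round trip, repeated for each digest, DH group, and key index. A minimal sketch of one such round trip, assuming a running SPDK target on its default RPC socket, the bdev_nvme host RPC server on /var/tmp/host.sock, key names (key0/ckey0) already registered with the target earlier in the test, and placeholder secrets standing in for the generated DHHC-1 keys:

#!/usr/bin/env bash
# One DH-HCHAP authentication round trip, as exercised by target/auth.sh.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a

# Pin the host-side initiator to a single digest/dhgroup pair.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
  --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

# Allow the host on the subsystem with a key (and, for bidirectional
# authentication, a controller key); key0/ckey0 name keys set up
# earlier in the test, not shown here.
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
  --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Attach an authenticated controller through the host RPC server ...
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
  -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
  --dhchap-key key0 --dhchap-ctrlr-key ckey0

# ... and confirm on the target side that the qpair finished
# authentication with the expected digest and DH group.
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# Repeat the handshake with the kernel initiator, passing the literal
# DHHC-1 secrets (placeholders here) instead of key names.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
  --hostid 29f67375-a902-e411-ace9-001e67bc3c9a \
  --dhchap-secret "DHHC-1:00:<host key>:" \
  --dhchap-ctrl-secret "DHHC-1:03:<controller key>:"
nvme disconnect -n "$subnqn"
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The jq check at the midpoint is what the [[ sha512 == \s\h\a\5\1\2 ]]-style comparisons in the trace assert: a successfully authenticated qpair reports "state": "completed" together with the negotiated "digest" and "dhgroup".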
00:16:52.728 23:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:52.728 23:43:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.728 23:43:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.728 23:43:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.728 23:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:52.728 23:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:52.728 23:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:52.985 23:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:16:52.985 23:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:52.985 23:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:52.985 23:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:52.985 23:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:52.985 23:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.985 23:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:52.985 23:43:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.985 23:43:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.985 23:43:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.986 23:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:52.986 23:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:53.243 00:16:53.243 23:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:53.243 23:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:53.243 23:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.501 23:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.501 23:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.501 23:43:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.501 23:43:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:16:53.501 23:43:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.501 23:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:53.501 { 00:16:53.501 "cntlid": 127, 00:16:53.501 "qid": 0, 00:16:53.501 "state": "enabled", 00:16:53.501 "thread": "nvmf_tgt_poll_group_000", 00:16:53.501 "listen_address": { 00:16:53.501 "trtype": "TCP", 00:16:53.501 "adrfam": "IPv4", 00:16:53.501 "traddr": "10.0.0.2", 00:16:53.501 "trsvcid": "4420" 00:16:53.501 }, 00:16:53.501 "peer_address": { 00:16:53.501 "trtype": "TCP", 00:16:53.501 "adrfam": "IPv4", 00:16:53.501 "traddr": "10.0.0.1", 00:16:53.501 "trsvcid": "51528" 00:16:53.501 }, 00:16:53.501 "auth": { 00:16:53.501 "state": "completed", 00:16:53.501 "digest": "sha512", 00:16:53.501 "dhgroup": "ffdhe4096" 00:16:53.501 } 00:16:53.501 } 00:16:53.501 ]' 00:16:53.501 23:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:53.501 23:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:53.501 23:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:53.501 23:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:53.501 23:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:53.759 23:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.759 23:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.759 23:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.759 23:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MzMzNmIzZGYyYTMxY2JiNGQ5Njk2ODZiZGZjNDhiNGJhNGFlYjI4NmRkNDFlYjViMDhhZWZjMTQzYzcyOWMxMxeY0e4=: 00:16:54.692 23:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.692 23:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:54.692 23:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.692 23:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.692 23:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.692 23:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:54.692 23:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:54.692 23:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:54.692 23:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:54.951 23:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:16:54.951 23:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:54.951 23:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:54.951 23:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:54.951 23:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:54.951 23:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.951 23:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.951 23:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.951 23:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.951 23:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.951 23:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.951 23:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.517 00:16:55.517 23:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:55.517 23:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:55.517 23:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.775 23:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.775 23:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.775 23:43:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.775 23:43:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.775 23:43:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.775 23:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:55.775 { 00:16:55.775 "cntlid": 129, 00:16:55.775 "qid": 0, 00:16:55.775 "state": "enabled", 00:16:55.775 "thread": "nvmf_tgt_poll_group_000", 00:16:55.775 "listen_address": { 00:16:55.775 "trtype": "TCP", 00:16:55.775 "adrfam": "IPv4", 00:16:55.775 "traddr": "10.0.0.2", 00:16:55.775 "trsvcid": "4420" 00:16:55.775 }, 00:16:55.775 "peer_address": { 00:16:55.775 "trtype": "TCP", 00:16:55.775 "adrfam": "IPv4", 00:16:55.775 "traddr": "10.0.0.1", 00:16:55.775 "trsvcid": "51566" 00:16:55.775 }, 00:16:55.775 "auth": { 00:16:55.775 "state": "completed", 00:16:55.775 "digest": "sha512", 00:16:55.775 "dhgroup": "ffdhe6144" 00:16:55.775 } 00:16:55.775 } 00:16:55.775 ]' 00:16:55.775 23:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:55.775 23:43:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:55.775 23:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:55.775 23:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:55.775 23:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:55.775 23:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.775 23:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.775 23:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.033 23:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:NzJhODNmMTE3MDI2NTg1ODA2YjMwNjdjNjBkZmIxMTFhNGJlYzEzZjIyNzRiMGJmoI/A2g==: --dhchap-ctrl-secret DHHC-1:03:MTZkNzZhYzgwMWM5MGFkM2EzNGM3YjhjZTExMmVhNjRjMzU1MzQyODQwNjVmYzE3ODEwYzUyODMzZWIyMzMzZigqzgg=: 00:16:56.967 23:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.225 23:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:57.225 23:43:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.225 23:43:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.225 23:43:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.225 23:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:57.225 23:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:57.225 23:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:57.225 23:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:16:57.225 23:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:57.225 23:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:57.225 23:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:57.225 23:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:57.225 23:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.225 23:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.225 23:43:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.225 23:43:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.225 23:43:32 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.225 23:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.225 23:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.790 00:16:57.790 23:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:57.790 23:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:57.790 23:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.047 23:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.047 23:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.047 23:43:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.047 23:43:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.047 23:43:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.047 23:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:58.047 { 00:16:58.047 "cntlid": 131, 00:16:58.047 "qid": 0, 00:16:58.047 "state": "enabled", 00:16:58.047 "thread": "nvmf_tgt_poll_group_000", 00:16:58.047 "listen_address": { 00:16:58.047 "trtype": "TCP", 00:16:58.047 "adrfam": "IPv4", 00:16:58.047 "traddr": "10.0.0.2", 00:16:58.047 "trsvcid": "4420" 00:16:58.047 }, 00:16:58.047 "peer_address": { 00:16:58.047 "trtype": "TCP", 00:16:58.047 "adrfam": "IPv4", 00:16:58.047 "traddr": "10.0.0.1", 00:16:58.047 "trsvcid": "51592" 00:16:58.047 }, 00:16:58.047 "auth": { 00:16:58.047 "state": "completed", 00:16:58.047 "digest": "sha512", 00:16:58.047 "dhgroup": "ffdhe6144" 00:16:58.047 } 00:16:58.047 } 00:16:58.047 ]' 00:16:58.047 23:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:58.308 23:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:58.308 23:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:58.308 23:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:58.308 23:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:58.308 23:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.308 23:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.308 23:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.569 23:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:MjdlNzliMjY5NWVlMjJlNGEyZjc1N2Y3ODY5NGFkZWL42jLu: --dhchap-ctrl-secret DHHC-1:02:YTE4ZTA5Y2FmMDg1ODc3YzgwYTgxMjdlY2Q0ZjIxZmVjZGY3MTU3NzU2M2Y2MjI00G3iEg==: 00:16:59.499 23:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.499 23:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:59.499 23:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.499 23:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.499 23:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.499 23:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:59.499 23:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:59.499 23:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:59.754 23:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:16:59.754 23:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:59.754 23:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:59.754 23:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:59.754 23:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:59.754 23:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.754 23:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.754 23:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.754 23:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.754 23:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.754 23:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.754 23:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.316 00:17:00.316 23:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:00.316 23:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:00.316 23:43:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.573 23:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.573 23:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.573 23:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.573 23:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.573 23:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.573 23:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:00.573 { 00:17:00.573 "cntlid": 133, 00:17:00.573 "qid": 0, 00:17:00.573 "state": "enabled", 00:17:00.573 "thread": "nvmf_tgt_poll_group_000", 00:17:00.573 "listen_address": { 00:17:00.573 "trtype": "TCP", 00:17:00.573 "adrfam": "IPv4", 00:17:00.573 "traddr": "10.0.0.2", 00:17:00.573 "trsvcid": "4420" 00:17:00.573 }, 00:17:00.573 "peer_address": { 00:17:00.573 "trtype": "TCP", 00:17:00.573 "adrfam": "IPv4", 00:17:00.573 "traddr": "10.0.0.1", 00:17:00.573 "trsvcid": "51622" 00:17:00.573 }, 00:17:00.573 "auth": { 00:17:00.573 "state": "completed", 00:17:00.573 "digest": "sha512", 00:17:00.573 "dhgroup": "ffdhe6144" 00:17:00.573 } 00:17:00.573 } 00:17:00.573 ]' 00:17:00.573 23:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:00.573 23:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:00.573 23:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:00.573 23:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:00.573 23:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:00.830 23:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.830 23:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.830 23:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.086 23:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MGVkOWQ3NTk4ODk3OWM0Y2FkYjBjZmY0NTc5MGM0YmVkNWI4OGI5MDZhYWExODg4jJz3QQ==: --dhchap-ctrl-secret DHHC-1:01:YjE2NDI4YWY4YjkzNGEyMGQ3MmI0Mjk1MDFmODY1MWOvraqq: 00:17:02.016 23:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.016 23:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:02.016 23:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.016 23:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.016 23:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.016 23:43:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:02.016 23:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:02.016 23:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:02.272 23:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:17:02.272 23:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:02.272 23:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:02.272 23:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:02.272 23:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:02.272 23:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.272 23:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:02.272 23:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.272 23:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.272 23:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.272 23:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:02.272 23:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:02.834 00:17:02.834 23:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:02.834 23:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.834 23:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:02.834 23:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.834 23:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.834 23:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.834 23:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.834 23:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.834 23:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:02.834 { 00:17:02.834 "cntlid": 135, 00:17:02.834 "qid": 0, 00:17:02.834 "state": "enabled", 00:17:02.834 "thread": "nvmf_tgt_poll_group_000", 00:17:02.834 "listen_address": { 00:17:02.834 "trtype": "TCP", 00:17:02.834 "adrfam": "IPv4", 00:17:02.834 "traddr": "10.0.0.2", 00:17:02.834 "trsvcid": "4420" 00:17:02.834 }, 
00:17:02.834 "peer_address": { 00:17:02.834 "trtype": "TCP", 00:17:02.834 "adrfam": "IPv4", 00:17:02.834 "traddr": "10.0.0.1", 00:17:02.834 "trsvcid": "54540" 00:17:02.834 }, 00:17:02.834 "auth": { 00:17:02.834 "state": "completed", 00:17:02.834 "digest": "sha512", 00:17:02.834 "dhgroup": "ffdhe6144" 00:17:02.834 } 00:17:02.834 } 00:17:02.834 ]' 00:17:02.834 23:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:03.109 23:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:03.109 23:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:03.109 23:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:03.109 23:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:03.109 23:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.109 23:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.109 23:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.366 23:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MzMzNmIzZGYyYTMxY2JiNGQ5Njk2ODZiZGZjNDhiNGJhNGFlYjI4NmRkNDFlYjViMDhhZWZjMTQzYzcyOWMxMxeY0e4=: 00:17:04.297 23:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.297 23:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:04.297 23:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.297 23:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.297 23:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.297 23:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:04.297 23:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:04.297 23:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:04.297 23:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:04.555 23:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:17:04.555 23:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:04.555 23:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:04.555 23:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:04.555 23:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:04.555 23:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:17:04.555 23:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.555 23:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.555 23:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.555 23:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.555 23:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.555 23:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.486 00:17:05.486 23:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:05.486 23:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.486 23:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:05.486 23:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.486 23:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.486 23:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.486 23:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.486 23:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.486 23:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:05.486 { 00:17:05.486 "cntlid": 137, 00:17:05.486 "qid": 0, 00:17:05.486 "state": "enabled", 00:17:05.486 "thread": "nvmf_tgt_poll_group_000", 00:17:05.486 "listen_address": { 00:17:05.486 "trtype": "TCP", 00:17:05.486 "adrfam": "IPv4", 00:17:05.486 "traddr": "10.0.0.2", 00:17:05.486 "trsvcid": "4420" 00:17:05.486 }, 00:17:05.486 "peer_address": { 00:17:05.486 "trtype": "TCP", 00:17:05.486 "adrfam": "IPv4", 00:17:05.486 "traddr": "10.0.0.1", 00:17:05.486 "trsvcid": "54580" 00:17:05.486 }, 00:17:05.486 "auth": { 00:17:05.486 "state": "completed", 00:17:05.486 "digest": "sha512", 00:17:05.486 "dhgroup": "ffdhe8192" 00:17:05.486 } 00:17:05.486 } 00:17:05.486 ]' 00:17:05.486 23:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:05.486 23:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:05.486 23:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:05.743 23:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:05.743 23:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:05.743 23:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.743 23:43:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.743 23:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.999 23:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:NzJhODNmMTE3MDI2NTg1ODA2YjMwNjdjNjBkZmIxMTFhNGJlYzEzZjIyNzRiMGJmoI/A2g==: --dhchap-ctrl-secret DHHC-1:03:MTZkNzZhYzgwMWM5MGFkM2EzNGM3YjhjZTExMmVhNjRjMzU1MzQyODQwNjVmYzE3ODEwYzUyODMzZWIyMzMzZigqzgg=: 00:17:06.950 23:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.950 23:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:06.950 23:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.950 23:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.950 23:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.950 23:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:06.950 23:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:06.950 23:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:07.227 23:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:17:07.227 23:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:07.227 23:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:07.227 23:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:07.227 23:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:07.227 23:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.227 23:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.227 23:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.227 23:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.227 23:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.227 23:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.227 23:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.160 00:17:08.160 23:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:08.160 23:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:08.160 23:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.160 23:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.160 23:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.160 23:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.160 23:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.160 23:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.160 23:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:08.160 { 00:17:08.160 "cntlid": 139, 00:17:08.160 "qid": 0, 00:17:08.160 "state": "enabled", 00:17:08.160 "thread": "nvmf_tgt_poll_group_000", 00:17:08.160 "listen_address": { 00:17:08.160 "trtype": "TCP", 00:17:08.160 "adrfam": "IPv4", 00:17:08.160 "traddr": "10.0.0.2", 00:17:08.160 "trsvcid": "4420" 00:17:08.160 }, 00:17:08.160 "peer_address": { 00:17:08.160 "trtype": "TCP", 00:17:08.160 "adrfam": "IPv4", 00:17:08.160 "traddr": "10.0.0.1", 00:17:08.160 "trsvcid": "54600" 00:17:08.160 }, 00:17:08.160 "auth": { 00:17:08.160 "state": "completed", 00:17:08.160 "digest": "sha512", 00:17:08.160 "dhgroup": "ffdhe8192" 00:17:08.160 } 00:17:08.160 } 00:17:08.160 ]' 00:17:08.160 23:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:08.160 23:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:08.160 23:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:08.417 23:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:08.417 23:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:08.417 23:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.417 23:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.417 23:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.674 23:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:MjdlNzliMjY5NWVlMjJlNGEyZjc1N2Y3ODY5NGFkZWL42jLu: --dhchap-ctrl-secret DHHC-1:02:YTE4ZTA5Y2FmMDg1ODc3YzgwYTgxMjdlY2Q0ZjIxZmVjZGY3MTU3NzU2M2Y2MjI00G3iEg==: 00:17:09.605 23:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.605 23:43:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:09.605 23:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.605 23:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.605 23:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.605 23:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:09.605 23:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:09.605 23:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:09.605 23:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:17:09.605 23:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:09.605 23:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:09.605 23:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:09.605 23:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:09.605 23:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.605 23:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.605 23:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.605 23:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.605 23:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.605 23:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.605 23:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.539 00:17:10.539 23:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:10.539 23:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:10.539 23:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.798 23:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.798 23:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.798 23:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.798 23:43:45 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:10.798 23:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.798 23:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:10.798 { 00:17:10.798 "cntlid": 141, 00:17:10.798 "qid": 0, 00:17:10.798 "state": "enabled", 00:17:10.798 "thread": "nvmf_tgt_poll_group_000", 00:17:10.798 "listen_address": { 00:17:10.798 "trtype": "TCP", 00:17:10.798 "adrfam": "IPv4", 00:17:10.798 "traddr": "10.0.0.2", 00:17:10.798 "trsvcid": "4420" 00:17:10.798 }, 00:17:10.798 "peer_address": { 00:17:10.798 "trtype": "TCP", 00:17:10.798 "adrfam": "IPv4", 00:17:10.798 "traddr": "10.0.0.1", 00:17:10.798 "trsvcid": "54626" 00:17:10.798 }, 00:17:10.798 "auth": { 00:17:10.798 "state": "completed", 00:17:10.798 "digest": "sha512", 00:17:10.798 "dhgroup": "ffdhe8192" 00:17:10.798 } 00:17:10.798 } 00:17:10.798 ]' 00:17:10.798 23:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:10.798 23:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:10.798 23:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:10.798 23:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:10.798 23:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:10.798 23:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.798 23:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.798 23:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.055 23:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MGVkOWQ3NTk4ODk3OWM0Y2FkYjBjZmY0NTc5MGM0YmVkNWI4OGI5MDZhYWExODg4jJz3QQ==: --dhchap-ctrl-secret DHHC-1:01:YjE2NDI4YWY4YjkzNGEyMGQ3MmI0Mjk1MDFmODY1MWOvraqq: 00:17:11.990 23:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.990 23:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:11.990 23:43:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.990 23:43:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.990 23:43:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.990 23:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:11.990 23:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:11.990 23:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:12.556 23:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:17:12.556 23:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:12.556 23:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:12.556 23:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:12.556 23:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:12.556 23:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.556 23:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:12.556 23:43:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.556 23:43:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.556 23:43:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.556 23:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:12.556 23:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:13.121 00:17:13.379 23:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:13.380 23:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:13.380 23:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.638 23:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.638 23:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.638 23:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.638 23:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.638 23:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.638 23:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:13.638 { 00:17:13.638 "cntlid": 143, 00:17:13.638 "qid": 0, 00:17:13.638 "state": "enabled", 00:17:13.638 "thread": "nvmf_tgt_poll_group_000", 00:17:13.638 "listen_address": { 00:17:13.638 "trtype": "TCP", 00:17:13.638 "adrfam": "IPv4", 00:17:13.638 "traddr": "10.0.0.2", 00:17:13.638 "trsvcid": "4420" 00:17:13.638 }, 00:17:13.638 "peer_address": { 00:17:13.638 "trtype": "TCP", 00:17:13.638 "adrfam": "IPv4", 00:17:13.638 "traddr": "10.0.0.1", 00:17:13.638 "trsvcid": "55272" 00:17:13.638 }, 00:17:13.638 "auth": { 00:17:13.638 "state": "completed", 00:17:13.638 "digest": "sha512", 00:17:13.638 "dhgroup": "ffdhe8192" 00:17:13.638 } 00:17:13.638 } 00:17:13.638 ]' 00:17:13.638 23:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:13.638 23:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:13.638 
23:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:13.638 23:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:13.638 23:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:13.638 23:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.638 23:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.638 23:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.895 23:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MzMzNmIzZGYyYTMxY2JiNGQ5Njk2ODZiZGZjNDhiNGJhNGFlYjI4NmRkNDFlYjViMDhhZWZjMTQzYzcyOWMxMxeY0e4=: 00:17:14.823 23:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.823 23:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:14.823 23:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.823 23:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.823 23:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.823 23:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:17:14.823 23:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:17:14.823 23:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:17:14.823 23:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:14.823 23:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:14.823 23:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:15.080 23:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:17:15.080 23:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:15.080 23:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:15.080 23:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:15.080 23:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:15.080 23:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.080 23:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:17:15.080 23:43:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.080 23:43:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.080 23:43:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.080 23:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.080 23:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.031 00:17:16.031 23:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:16.031 23:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:16.032 23:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.288 23:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.288 23:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.288 23:43:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.288 23:43:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.288 23:43:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.288 23:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:16.288 { 00:17:16.288 "cntlid": 145, 00:17:16.288 "qid": 0, 00:17:16.288 "state": "enabled", 00:17:16.288 "thread": "nvmf_tgt_poll_group_000", 00:17:16.288 "listen_address": { 00:17:16.288 "trtype": "TCP", 00:17:16.288 "adrfam": "IPv4", 00:17:16.288 "traddr": "10.0.0.2", 00:17:16.288 "trsvcid": "4420" 00:17:16.288 }, 00:17:16.288 "peer_address": { 00:17:16.288 "trtype": "TCP", 00:17:16.288 "adrfam": "IPv4", 00:17:16.288 "traddr": "10.0.0.1", 00:17:16.288 "trsvcid": "55302" 00:17:16.288 }, 00:17:16.288 "auth": { 00:17:16.288 "state": "completed", 00:17:16.288 "digest": "sha512", 00:17:16.288 "dhgroup": "ffdhe8192" 00:17:16.288 } 00:17:16.288 } 00:17:16.288 ]' 00:17:16.288 23:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:16.288 23:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:16.288 23:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:16.288 23:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:16.288 23:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:16.288 23:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.288 23:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.288 23:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.544 23:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:NzJhODNmMTE3MDI2NTg1ODA2YjMwNjdjNjBkZmIxMTFhNGJlYzEzZjIyNzRiMGJmoI/A2g==: --dhchap-ctrl-secret DHHC-1:03:MTZkNzZhYzgwMWM5MGFkM2EzNGM3YjhjZTExMmVhNjRjMzU1MzQyODQwNjVmYzE3ODEwYzUyODMzZWIyMzMzZigqzgg=: 00:17:17.472 23:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.472 23:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:17.472 23:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.472 23:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.472 23:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.472 23:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:17:17.472 23:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.472 23:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.472 23:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.472 23:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:17.472 23:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:17.472 23:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:17.473 23:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:17.473 23:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:17.473 23:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:17.473 23:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:17.473 23:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:17.473 23:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:17:18.403 request: 00:17:18.403 { 00:17:18.403 "name": "nvme0", 00:17:18.403 "trtype": "tcp", 00:17:18.403 "traddr": "10.0.0.2", 00:17:18.403 "adrfam": "ipv4", 00:17:18.403 "trsvcid": "4420", 00:17:18.403 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:18.403 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:18.403 "prchk_reftag": false, 00:17:18.403 "prchk_guard": false, 00:17:18.403 "hdgst": false, 00:17:18.403 "ddgst": false, 00:17:18.404 "dhchap_key": "key2", 00:17:18.404 "method": "bdev_nvme_attach_controller", 00:17:18.404 "req_id": 1 00:17:18.404 } 00:17:18.404 Got JSON-RPC error response 00:17:18.404 response: 00:17:18.404 { 00:17:18.404 "code": -5, 00:17:18.404 "message": "Input/output error" 00:17:18.404 } 00:17:18.404 23:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:18.404 23:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:18.404 23:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:18.404 23:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:18.404 23:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:18.404 23:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.404 23:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.404 23:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.404 23:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.404 23:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.404 23:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.404 23:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.404 23:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:18.404 23:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:18.404 23:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:18.404 23:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:18.404 23:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:18.404 23:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:18.404 23:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:18.404 23:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:18.404 23:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:18.967 request: 00:17:18.967 { 00:17:18.967 "name": "nvme0", 00:17:18.967 "trtype": "tcp", 00:17:18.967 "traddr": "10.0.0.2", 00:17:18.967 "adrfam": "ipv4", 00:17:18.967 "trsvcid": "4420", 00:17:18.967 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:18.967 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:18.967 "prchk_reftag": false, 00:17:18.967 "prchk_guard": false, 00:17:18.967 "hdgst": false, 00:17:18.967 "ddgst": false, 00:17:18.967 "dhchap_key": "key1", 00:17:18.967 "dhchap_ctrlr_key": "ckey2", 00:17:18.967 "method": "bdev_nvme_attach_controller", 00:17:18.967 "req_id": 1 00:17:18.967 } 00:17:18.967 Got JSON-RPC error response 00:17:18.967 response: 00:17:18.967 { 00:17:18.967 "code": -5, 00:17:18.967 "message": "Input/output error" 00:17:18.967 } 00:17:19.244 23:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:19.244 23:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:19.244 23:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:19.244 23:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:19.244 23:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:19.244 23:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.244 23:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.244 23:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.244 23:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:17:19.244 23:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.244 23:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.244 23:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.244 23:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.244 23:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:19.244 23:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.244 23:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:17:19.244 23:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:19.244 23:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:19.244 23:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:19.244 23:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.244 23:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.810 request: 00:17:19.810 { 00:17:19.810 "name": "nvme0", 00:17:19.810 "trtype": "tcp", 00:17:19.810 "traddr": "10.0.0.2", 00:17:19.810 "adrfam": "ipv4", 00:17:19.810 "trsvcid": "4420", 00:17:19.810 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:19.810 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:19.810 "prchk_reftag": false, 00:17:19.810 "prchk_guard": false, 00:17:19.810 "hdgst": false, 00:17:19.810 "ddgst": false, 00:17:19.810 "dhchap_key": "key1", 00:17:19.810 "dhchap_ctrlr_key": "ckey1", 00:17:19.810 "method": "bdev_nvme_attach_controller", 00:17:19.810 "req_id": 1 00:17:19.810 } 00:17:19.810 Got JSON-RPC error response 00:17:19.810 response: 00:17:19.810 { 00:17:19.810 "code": -5, 00:17:19.810 "message": "Input/output error" 00:17:19.810 } 00:17:20.067 23:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:20.068 23:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:20.068 23:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:20.068 23:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:20.068 23:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:20.068 23:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.068 23:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.068 23:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.068 23:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 3775086 00:17:20.068 23:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3775086 ']' 00:17:20.068 23:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3775086 00:17:20.068 23:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:20.068 23:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:20.068 23:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3775086 00:17:20.068 23:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:20.068 23:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:17:20.068 23:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3775086' 00:17:20.068 killing process with pid 3775086 00:17:20.068 23:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3775086 00:17:20.068 23:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3775086 00:17:20.326 23:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:20.326 23:43:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:20.326 23:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:20.326 23:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.326 23:43:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3796966 00:17:20.326 23:43:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:20.326 23:43:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3796966 00:17:20.326 23:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3796966 ']' 00:17:20.326 23:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.326 23:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:20.326 23:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.326 23:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:20.326 23:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.584 23:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:20.584 23:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:20.584 23:43:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:20.584 23:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:20.584 23:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.584 23:43:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:20.584 23:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:20.584 23:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 3796966 00:17:20.584 23:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3796966 ']' 00:17:20.584 23:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.584 23:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:20.584 23:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:20.585 23:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:20.585 23:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.843 23:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:20.843 23:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:20.843 23:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:17:20.843 23:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.843 23:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.843 23:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.843 23:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:17:20.843 23:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:20.843 23:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:20.843 23:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:20.843 23:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:20.843 23:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.843 23:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:20.843 23:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.843 23:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.843 23:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.843 23:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:20.843 23:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:21.777 00:17:21.777 23:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:21.777 23:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:21.777 23:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.033 23:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.033 23:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.033 23:43:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.033 23:43:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.033 23:43:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.033 23:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:22.033 { 00:17:22.033 
"cntlid": 1, 00:17:22.033 "qid": 0, 00:17:22.033 "state": "enabled", 00:17:22.033 "thread": "nvmf_tgt_poll_group_000", 00:17:22.033 "listen_address": { 00:17:22.033 "trtype": "TCP", 00:17:22.033 "adrfam": "IPv4", 00:17:22.033 "traddr": "10.0.0.2", 00:17:22.033 "trsvcid": "4420" 00:17:22.033 }, 00:17:22.033 "peer_address": { 00:17:22.033 "trtype": "TCP", 00:17:22.033 "adrfam": "IPv4", 00:17:22.033 "traddr": "10.0.0.1", 00:17:22.033 "trsvcid": "48466" 00:17:22.033 }, 00:17:22.033 "auth": { 00:17:22.033 "state": "completed", 00:17:22.033 "digest": "sha512", 00:17:22.033 "dhgroup": "ffdhe8192" 00:17:22.033 } 00:17:22.033 } 00:17:22.033 ]' 00:17:22.033 23:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:22.033 23:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:22.033 23:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:22.033 23:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:22.033 23:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:22.033 23:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.033 23:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.033 23:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.290 23:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:MzMzNmIzZGYyYTMxY2JiNGQ5Njk2ODZiZGZjNDhiNGJhNGFlYjI4NmRkNDFlYjViMDhhZWZjMTQzYzcyOWMxMxeY0e4=: 00:17:23.221 23:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.221 23:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:23.221 23:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.221 23:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.221 23:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.221 23:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:23.221 23:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.221 23:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.221 23:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.221 23:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:23.222 23:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:23.479 23:43:58 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:23.479 23:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:23.479 23:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:23.479 23:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:23.479 23:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:23.479 23:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:23.479 23:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:23.479 23:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:23.479 23:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:23.736 request: 00:17:23.736 { 00:17:23.736 "name": "nvme0", 00:17:23.736 "trtype": "tcp", 00:17:23.736 "traddr": "10.0.0.2", 00:17:23.736 "adrfam": "ipv4", 00:17:23.736 "trsvcid": "4420", 00:17:23.736 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:23.736 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:23.736 "prchk_reftag": false, 00:17:23.736 "prchk_guard": false, 00:17:23.736 "hdgst": false, 00:17:23.736 "ddgst": false, 00:17:23.736 "dhchap_key": "key3", 00:17:23.736 "method": "bdev_nvme_attach_controller", 00:17:23.736 "req_id": 1 00:17:23.736 } 00:17:23.736 Got JSON-RPC error response 00:17:23.736 response: 00:17:23.736 { 00:17:23.736 "code": -5, 00:17:23.736 "message": "Input/output error" 00:17:23.736 } 00:17:23.736 23:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:23.736 23:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:23.736 23:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:23.736 23:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:23.736 23:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:17:23.736 23:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:17:23.736 23:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:23.736 23:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:23.994 23:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:23.994 23:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:23.994 23:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:23.994 23:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:23.994 23:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:23.994 23:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:23.994 23:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:23.994 23:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:23.994 23:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:24.260 request: 00:17:24.260 { 00:17:24.260 "name": "nvme0", 00:17:24.260 "trtype": "tcp", 00:17:24.260 "traddr": "10.0.0.2", 00:17:24.260 "adrfam": "ipv4", 00:17:24.260 "trsvcid": "4420", 00:17:24.260 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:24.260 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:24.260 "prchk_reftag": false, 00:17:24.260 "prchk_guard": false, 00:17:24.260 "hdgst": false, 00:17:24.260 "ddgst": false, 00:17:24.260 "dhchap_key": "key3", 00:17:24.260 "method": "bdev_nvme_attach_controller", 00:17:24.260 "req_id": 1 00:17:24.260 } 00:17:24.260 Got JSON-RPC error response 00:17:24.260 response: 00:17:24.260 { 00:17:24.260 "code": -5, 00:17:24.260 "message": "Input/output error" 00:17:24.260 } 00:17:24.260 23:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:24.260 23:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:24.260 23:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:24.260 23:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:24.260 23:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:24.260 23:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:17:24.260 23:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:24.260 23:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:24.260 23:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:24.260 23:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:24.557 23:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:24.557 23:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.557 23:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.557 23:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.557 23:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:24.557 23:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.557 23:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.557 23:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.557 23:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:24.557 23:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:24.557 23:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:24.557 23:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:24.557 23:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:24.557 23:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:24.557 23:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:24.557 23:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:24.557 23:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:24.815 request: 00:17:24.815 { 00:17:24.815 "name": "nvme0", 00:17:24.815 "trtype": "tcp", 00:17:24.815 "traddr": "10.0.0.2", 00:17:24.815 "adrfam": "ipv4", 00:17:24.815 "trsvcid": "4420", 00:17:24.815 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:24.815 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:24.815 "prchk_reftag": false, 00:17:24.815 "prchk_guard": false, 00:17:24.815 "hdgst": false, 00:17:24.815 "ddgst": false, 00:17:24.815 
"dhchap_key": "key0", 00:17:24.815 "dhchap_ctrlr_key": "key1", 00:17:24.815 "method": "bdev_nvme_attach_controller", 00:17:24.815 "req_id": 1 00:17:24.815 } 00:17:24.815 Got JSON-RPC error response 00:17:24.815 response: 00:17:24.815 { 00:17:24.815 "code": -5, 00:17:24.815 "message": "Input/output error" 00:17:24.815 } 00:17:24.815 23:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:24.815 23:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:24.815 23:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:24.815 23:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:24.815 23:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:24.815 23:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:25.074 00:17:25.074 23:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:17:25.074 23:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.074 23:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:17:25.331 23:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.331 23:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.331 23:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.590 23:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:17:25.590 23:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:17:25.590 23:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3775137 00:17:25.590 23:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3775137 ']' 00:17:25.590 23:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3775137 00:17:25.590 23:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:25.590 23:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:25.590 23:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3775137 00:17:25.590 23:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:25.590 23:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:25.590 23:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3775137' 00:17:25.590 killing process with pid 3775137 00:17:25.590 23:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3775137 00:17:25.590 23:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3775137 
00:17:26.155 23:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:26.155 23:44:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:26.155 23:44:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:17:26.155 23:44:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:26.155 23:44:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:17:26.155 23:44:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:26.155 23:44:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:26.155 rmmod nvme_tcp 00:17:26.155 rmmod nvme_fabrics 00:17:26.155 rmmod nvme_keyring 00:17:26.155 23:44:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:26.155 23:44:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:17:26.155 23:44:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:17:26.155 23:44:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 3796966 ']' 00:17:26.155 23:44:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 3796966 00:17:26.155 23:44:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3796966 ']' 00:17:26.155 23:44:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3796966 00:17:26.155 23:44:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:26.155 23:44:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:26.155 23:44:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3796966 00:17:26.155 23:44:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:26.155 23:44:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:26.155 23:44:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3796966' 00:17:26.155 killing process with pid 3796966 00:17:26.155 23:44:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3796966 00:17:26.155 23:44:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3796966 00:17:26.413 23:44:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:26.413 23:44:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:26.413 23:44:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:26.413 23:44:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:26.413 23:44:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:26.413 23:44:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.413 23:44:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:26.413 23:44:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.323 23:44:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:28.323 23:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.4jb /tmp/spdk.key-sha256.n0s /tmp/spdk.key-sha384.YR5 /tmp/spdk.key-sha512.yn0 /tmp/spdk.key-sha512.tat /tmp/spdk.key-sha384.BnB /tmp/spdk.key-sha256.OXN '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:28.323 00:17:28.323 real 3m1.969s 00:17:28.323 user 7m5.488s 00:17:28.323 sys 0m25.380s 00:17:28.323 23:44:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:28.323 23:44:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.323 ************************************ 00:17:28.323 END TEST nvmf_auth_target 00:17:28.323 ************************************ 00:17:28.582 23:44:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:28.582 23:44:03 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:17:28.582 23:44:03 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:28.582 23:44:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:28.582 23:44:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:28.582 23:44:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:28.582 ************************************ 00:17:28.582 START TEST nvmf_bdevio_no_huge 00:17:28.582 ************************************ 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:28.583 * Looking for test storage... 00:17:28.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
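[Annotation] The nvmf/common.sh block just sourced pins the test topology: TCP ports 4420-4422, the 192.168.100 IP prefix, and a per-run host identity. A sketch of how that identity pair is derived; nvme gen-hostnqn is real nvme-cli, while the parameter expansion for the bare UUID is an assumption about the helper's internals:

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # assumed: strip the prefix, keep the UUID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # a later host connect then becomes:
    #   nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 \
    #       -n nqn.2016-06.io.spdk:testnqn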
00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:28.583 23:44:03 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:17:28.583 23:44:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
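[Annotation] The array-building trace that starts here and finishes just below classifies NICs by PCI vendor:device pairs out of a pre-built pci_bus_cache. A self-contained approximation using lspci, since the cache construction is outside this excerpt (the parsing is an assumption, not the helper's actual code):

    intel=0x8086 mellanox=0x15b3
    e810=() x722=() mlx=()
    # lspci -Dnmm prints: <domain:bus:dev.fn> "<class>" "<vendor>" "<device>" ...
    while read -r addr vendor device; do
        case "$vendor:$device" in
            "$intel:0x1592" | "$intel:0x159b") e810+=("$addr") ;;  # E810 family
            "$intel:0x37d2")                   x722+=("$addr") ;;  # X722
            "$mellanox:"*)                     mlx+=("$addr")  ;;  # ConnectX family
        esac
    done < <(lspci -Dnmm | awk '{gsub(/"/, ""); print $1, "0x" $3, "0x" $4}')
    pci_devs=("${e810[@]}")  # this run selects e810 via SPDK_TEST_NVMF_NICS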
00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:31.113 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:31.113 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:31.113 
23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:31.113 Found net devices under 0000:09:00.0: cvl_0_0 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:31.113 Found net devices under 0000:09:00.1: cvl_0_1 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:31.113 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:31.114 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:31.114 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:31.114 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:31.114 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:31.114 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:31.114 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:31.114 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:31.114 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:31.114 23:44:05 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:31.114 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:31.114 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:31.114 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:31.114 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:31.114 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:31.114 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:31.114 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:31.114 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:31.114 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:31.114 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:31.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:31.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:17:31.114 00:17:31.114 --- 10.0.0.2 ping statistics --- 00:17:31.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.114 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:17:31.114 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:31.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:31.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:17:31.114 00:17:31.114 --- 10.0.0.1 ping statistics --- 00:17:31.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.114 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:17:31.114 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:31.114 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:17:31.114 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:31.114 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:31.114 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:31.114 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:31.114 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:31.114 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:31.114 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:31.114 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:31.114 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:31.114 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:31.114 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:31.114 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=3799610 00:17:31.114 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:31.114 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 3799610 00:17:31.114 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 3799610 ']' 00:17:31.114 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:31.114 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:31.114 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:31.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:31.114 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:31.114 23:44:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:31.114 [2024-07-15 23:44:05.890833] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:17:31.114 [2024-07-15 23:44:05.890915] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:31.114 [2024-07-15 23:44:05.961911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:31.114 [2024-07-15 23:44:06.066677] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:31.114 [2024-07-15 23:44:06.066727] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:31.114 [2024-07-15 23:44:06.066755] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:31.114 [2024-07-15 23:44:06.066766] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:31.114 [2024-07-15 23:44:06.066775] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
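[Annotation] nvmfappstart above reduces to launching nvmf_tgt inside the target namespace without hugepages and polling its RPC socket before any configuration is attempted. A condensed sketch under those assumptions (the real waitforlisten helper also handles retries, timeouts, and core files):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    nvmfpid=$!
    # block until the app answers on /var/tmp/spdk.sock; bail if it died
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done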
00:17:31.114 [2024-07-15 23:44:06.066860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:31.114 [2024-07-15 23:44:06.066921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:31.114 [2024-07-15 23:44:06.067055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:31.114 [2024-07-15 23:44:06.067059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:31.114 23:44:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:31.114 23:44:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:17:31.114 23:44:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:31.114 23:44:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:31.114 23:44:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:31.114 23:44:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:31.114 23:44:06 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:31.114 23:44:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.114 23:44:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:31.114 [2024-07-15 23:44:06.182541] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:31.114 23:44:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.114 23:44:06 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:31.114 23:44:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.114 23:44:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:31.114 Malloc0 00:17:31.114 23:44:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.114 23:44:06 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:31.114 23:44:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.114 23:44:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:31.114 23:44:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.114 23:44:06 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:31.114 23:44:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.114 23:44:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:31.114 23:44:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.114 23:44:06 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:31.114 23:44:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.114 23:44:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:31.114 [2024-07-15 23:44:06.220183] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:31.114 23:44:06 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.114 23:44:06 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:31.114 23:44:06 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:31.114 23:44:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:17:31.114 23:44:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:17:31.114 23:44:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:31.114 23:44:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:31.114 { 00:17:31.114 "params": { 00:17:31.114 "name": "Nvme$subsystem", 00:17:31.114 "trtype": "$TEST_TRANSPORT", 00:17:31.114 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:31.114 "adrfam": "ipv4", 00:17:31.114 "trsvcid": "$NVMF_PORT", 00:17:31.114 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:31.114 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:31.114 "hdgst": ${hdgst:-false}, 00:17:31.114 "ddgst": ${ddgst:-false} 00:17:31.114 }, 00:17:31.114 "method": "bdev_nvme_attach_controller" 00:17:31.114 } 00:17:31.114 EOF 00:17:31.114 )") 00:17:31.114 23:44:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:17:31.114 23:44:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:17:31.114 23:44:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:17:31.114 23:44:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:31.114 "params": { 00:17:31.114 "name": "Nvme1", 00:17:31.114 "trtype": "tcp", 00:17:31.114 "traddr": "10.0.0.2", 00:17:31.114 "adrfam": "ipv4", 00:17:31.114 "trsvcid": "4420", 00:17:31.114 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:31.114 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:31.114 "hdgst": false, 00:17:31.114 "ddgst": false 00:17:31.114 }, 00:17:31.114 "method": "bdev_nvme_attach_controller" 00:17:31.114 }' 00:17:31.372 [2024-07-15 23:44:06.264598] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
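[Annotation] The heredoc above assembles a one-controller bdev config, and the jq/printf pair flattens it so bdevio can consume it as an anonymous file descriptor; that is why the command line shows --json /dev/fd/62. A sketch of the mechanism (gen_nvmf_target_json is the generator traced above; process substitution is what produces the fd):

    # bdevio reads the generated config with no temp file, and runs with the
    # same no-hugepage budget as the target it talks to
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio \
        --json <(gen_nvmf_target_json) --no-huge -s 1024
    # <(...) expands to /dev/fd/<n>, matching the /dev/fd/62 seen in the trace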
00:17:31.372 [2024-07-15 23:44:06.264678] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3799647 ]
00:17:31.372 [2024-07-15 23:44:06.331750] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:17:31.372 [2024-07-15 23:44:06.444427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:17:31.372 [2024-07-15 23:44:06.444475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:17:31.372 [2024-07-15 23:44:06.444478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:17:31.630 I/O targets:
00:17:31.630 Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:17:31.630
00:17:31.630
00:17:31.630 CUnit - A unit testing framework for C - Version 2.1-3
00:17:31.630 http://cunit.sourceforge.net/
00:17:31.630
00:17:31.630
00:17:31.630 Suite: bdevio tests on: Nvme1n1
00:17:31.630 Test: blockdev write read block ...passed
00:17:31.630 Test: blockdev write zeroes read block ...passed
00:17:31.630 Test: blockdev write zeroes read no split ...passed
00:17:31.887 Test: blockdev write zeroes read split ...passed
00:17:31.887 Test: blockdev write zeroes read split partial ...passed
00:17:31.887 Test: blockdev reset ...[2024-07-15 23:44:06.807372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:17:31.887 [2024-07-15 23:44:06.807480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ccdfb0 (9): Bad file descriptor
00:17:31.887 [2024-07-15 23:44:06.945596] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:17:31.887 passed 00:17:31.887 Test: blockdev write read 8 blocks ...passed 00:17:31.887 Test: blockdev write read size > 128k ...passed 00:17:31.887 Test: blockdev write read invalid size ...passed 00:17:31.887 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:31.887 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:31.887 Test: blockdev write read max offset ...passed 00:17:32.144 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:32.144 Test: blockdev writev readv 8 blocks ...passed 00:17:32.144 Test: blockdev writev readv 30 x 1block ...passed 00:17:32.144 Test: blockdev writev readv block ...passed 00:17:32.144 Test: blockdev writev readv size > 128k ...passed 00:17:32.144 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:32.144 Test: blockdev comparev and writev ...[2024-07-15 23:44:07.161285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:32.144 [2024-07-15 23:44:07.161321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:32.144 [2024-07-15 23:44:07.161346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:32.144 [2024-07-15 23:44:07.161364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:32.144 [2024-07-15 23:44:07.161707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:32.144 [2024-07-15 23:44:07.161731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:32.144 [2024-07-15 23:44:07.161754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:32.144 [2024-07-15 23:44:07.161771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:32.144 [2024-07-15 23:44:07.162122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:32.144 [2024-07-15 23:44:07.162146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:32.144 [2024-07-15 23:44:07.162168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:32.144 [2024-07-15 23:44:07.162185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:32.144 [2024-07-15 23:44:07.162513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:32.144 [2024-07-15 23:44:07.162536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:32.144 [2024-07-15 23:44:07.162559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:32.144 [2024-07-15 23:44:07.162581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:17:32.144 passed
00:17:32.144 Test: blockdev nvme passthru rw ...passed
00:17:32.144 Test: blockdev nvme passthru vendor specific ...[2024-07-15 23:44:07.246252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:17:32.144 [2024-07-15 23:44:07.246278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:17:32.144 [2024-07-15 23:44:07.246428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:17:32.144 [2024-07-15 23:44:07.246451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:17:32.144 [2024-07-15 23:44:07.246594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:17:32.144 [2024-07-15 23:44:07.246617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:17:32.144 [2024-07-15 23:44:07.246766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:17:32.144 [2024-07-15 23:44:07.246789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:17:32.144 passed
00:17:32.144 Test: blockdev nvme admin passthru ...passed
00:17:32.401 Test: blockdev copy ...passed
00:17:32.401
00:17:32.401 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:17:32.401               suites      1      1    n/a      0        0
00:17:32.401                tests     23     23     23      0        0
00:17:32.401              asserts    152    152    152      0      n/a
00:17:32.401
00:17:32.401 Elapsed time = 1.384 seconds
00:17:32.660 23:44:07 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:32.660 23:44:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:32.660 23:44:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:17:32.660 23:44:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:32.660 23:44:07 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:17:32.660 23:44:07 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini
00:17:32.660 23:44:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup
00:17:32.660 23:44:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync
00:17:32.660 23:44:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:17:32.660 23:44:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e
00:17:32.660 23:44:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20}
00:17:32.660 23:44:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:17:32.660 rmmod nvme_tcp
00:17:32.660 rmmod nvme_fabrics
00:17:32.660 rmmod nvme_keyring
00:17:32.660 23:44:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:17:32.660 23:44:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e
00:17:32.660 23:44:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0
00:17:32.660 23:44:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 3799610 ']'
00:17:32.660 23:44:07
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 3799610 00:17:32.660 23:44:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 3799610 ']' 00:17:32.660 23:44:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 3799610 00:17:32.660 23:44:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:17:32.660 23:44:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:32.660 23:44:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3799610 00:17:32.660 23:44:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:17:32.660 23:44:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:17:32.660 23:44:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3799610' 00:17:32.660 killing process with pid 3799610 00:17:32.660 23:44:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 3799610 00:17:32.660 23:44:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 3799610 00:17:33.228 23:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:33.228 23:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:33.228 23:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:33.228 23:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:33.228 23:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:33.228 23:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.228 23:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:33.228 23:44:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.133 23:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:35.133 00:17:35.133 real 0m6.709s 00:17:35.133 user 0m10.927s 00:17:35.133 sys 0m2.627s 00:17:35.133 23:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:35.133 23:44:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:35.133 ************************************ 00:17:35.133 END TEST nvmf_bdevio_no_huge 00:17:35.133 ************************************ 00:17:35.133 23:44:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:35.133 23:44:10 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:35.133 23:44:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:35.133 23:44:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:35.133 23:44:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:35.133 ************************************ 00:17:35.133 START TEST nvmf_tls 00:17:35.133 ************************************ 00:17:35.133 23:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:35.390 * Looking for test storage... 
00:17:35.390 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:35.390 23:44:10 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:35.390 23:44:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:35.390 23:44:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:35.390 23:44:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:35.390 23:44:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:35.390 23:44:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:35.390 23:44:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:35.390 23:44:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:35.390 23:44:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:35.390 23:44:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:35.390 23:44:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:35.390 23:44:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:35.390 23:44:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:35.390 23:44:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:35.390 23:44:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:35.390 23:44:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:35.390 23:44:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:35.390 23:44:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:35.390 23:44:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:35.390 23:44:10 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:35.390 23:44:10 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:35.390 23:44:10 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:35.390 23:44:10 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.390 23:44:10 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.390 23:44:10 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.390 23:44:10 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:35.391 23:44:10 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.391 23:44:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:17:35.391 23:44:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:35.391 23:44:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:35.391 23:44:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:35.391 23:44:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:35.391 23:44:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:35.391 23:44:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:35.391 23:44:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:35.391 23:44:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:35.391 23:44:10 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:35.391 23:44:10 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:17:35.391 23:44:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:35.391 23:44:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:35.391 23:44:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:35.391 23:44:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:35.391 23:44:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:35.391 23:44:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.391 23:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:35.391 23:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.391 23:44:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:35.391 23:44:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:35.391 23:44:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:17:35.391 23:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:17:37.292 
23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:37.292 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:37.292 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:37.292 Found net devices under 0000:09:00.0: cvl_0_0 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:37.292 Found net devices under 0000:09:00.1: cvl_0_1 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:37.292 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:37.293 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:37.293 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:37.293 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:37.551 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:37.551 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:37.551 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:37.551 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:37.551 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:37.551 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:37.551 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:37.551 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:37.551 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:37.551 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:37.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:37.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:17:37.551 00:17:37.551 --- 10.0.0.2 ping statistics --- 00:17:37.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.551 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:17:37.551 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:37.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:37.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:17:37.551 00:17:37.551 --- 10.0.0.1 ping statistics --- 00:17:37.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.551 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:17:37.551 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:37.551 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:17:37.551 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:37.551 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:37.551 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:37.551 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:37.551 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:37.551 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:37.551 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:37.551 23:44:12 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:37.551 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:37.551 23:44:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:37.551 23:44:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:37.551 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3801836 00:17:37.551 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:37.551 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3801836 00:17:37.551 23:44:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3801836 ']' 00:17:37.551 23:44:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.551 23:44:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:37.551 23:44:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.551 23:44:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:37.551 23:44:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:37.551 [2024-07-15 23:44:12.610862] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:17:37.551 [2024-07-15 23:44:12.610937] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:37.551 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.810 [2024-07-15 23:44:12.675881] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.810 [2024-07-15 23:44:12.781122] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:37.810 [2024-07-15 23:44:12.781176] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:37.810 [2024-07-15 23:44:12.781198] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:37.810 [2024-07-15 23:44:12.781209] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:37.810 [2024-07-15 23:44:12.781218] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:37.810 [2024-07-15 23:44:12.781258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:37.810 23:44:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:37.810 23:44:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:37.810 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:37.810 23:44:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:37.810 23:44:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:37.810 23:44:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:37.810 23:44:12 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:17:37.810 23:44:12 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:38.067 true 00:17:38.067 23:44:13 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:38.067 23:44:13 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:17:38.323 23:44:13 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:17:38.323 23:44:13 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:17:38.323 23:44:13 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:38.579 23:44:13 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:38.579 23:44:13 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:17:38.836 23:44:13 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:17:38.836 23:44:13 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:17:38.836 23:44:13 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:39.092 23:44:14 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:39.092 23:44:14 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:17:39.348 23:44:14 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:17:39.349 23:44:14 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:17:39.349 23:44:14 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:39.349 23:44:14 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:17:39.605 23:44:14 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:17:39.605 23:44:14 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:17:39.605 23:44:14 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:39.863 23:44:14 nvmf_tcp.nvmf_tls -- 
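The TLS-version and kTLS probing just traced reduces to round-tripping options through two RPCs against the target's default /var/tmp/spdk.sock. Condensed (rpc.py abbreviates the scripts/rpc.py path used above):

# Make ssl the default socket implementation, then confirm each option
# change is reflected by sock_impl_get_options.
rpc.py sock_set_default_impl -i ssl
rpc.py sock_impl_set_options -i ssl --tls-version 13
rpc.py sock_impl_get_options -i ssl | jq -r .tls_version   # expect 13
rpc.py sock_impl_set_options -i ssl --tls-version 7
rpc.py sock_impl_get_options -i ssl | jq -r .tls_version   # expect 7
rpc.py sock_impl_set_options -i ssl --enable-ktls
rpc.py sock_impl_get_options -i ssl | jq -r .enable_ktls   # expect true
rpc.py sock_impl_set_options -i ssl --disable-ktls
rpc.py sock_impl_get_options -i ssl | jq -r .enable_ktls   # expect false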
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:39.863 23:44:14 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:17:40.121 23:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:17:40.121 23:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:17:40.121 23:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:40.378 23:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:40.378 23:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:17:40.637 23:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:17:40.637 23:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:17:40.637 23:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:40.637 23:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:40.637 23:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:40.637 23:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:40.637 23:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:17:40.637 23:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:40.637 23:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:40.637 23:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:40.637 23:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:40.637 23:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:40.637 23:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:40.637 23:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:40.637 23:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:17:40.637 23:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:40.637 23:44:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:40.637 23:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:40.637 23:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:17:40.637 23:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.tLoC807Z0B 00:17:40.637 23:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:40.637 23:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.GCuKgIgJWW 00:17:40.637 23:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:40.637 23:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:40.637 23:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.tLoC807Z0B 00:17:40.637 23:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.GCuKgIgJWW 00:17:40.637 23:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
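The two inline `python -` runs above implement the NVMe TLS PSK interchange format: the configured key characters plus their little-endian CRC32, base64-encoded, framed as NVMeTLSkey-1:<hh>:...:, where <hh> names the hash (01 = HMAC-SHA-256). A standalone reconstruction of format_interchange_psk, inferred from the keys printed in this run rather than copied from nvmf/common.sh, so treat the details as an assumption:

# Build an interchange PSK from a key string and a hash identifier.
format_interchange_psk() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" << 'EOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, byteorder="little")  # 4-byte integrity tag
b64 = base64.b64encode(key + crc).decode("utf-8")
print("NVMeTLSkey-1:{:02x}:{}:".format(digest, b64))
EOF
}

format_interchange_psk 00112233445566778899aabbccddeeff 1
# NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: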
sock_impl_set_options -i ssl --tls-version 13 00:17:40.895 23:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:41.154 23:44:16 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.tLoC807Z0B 00:17:41.154 23:44:16 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.tLoC807Z0B 00:17:41.154 23:44:16 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:41.413 [2024-07-15 23:44:16.501909] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:41.413 23:44:16 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:41.671 23:44:16 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:41.929 [2024-07-15 23:44:16.999416] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:41.929 [2024-07-15 23:44:16.999675] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:41.929 23:44:17 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:42.206 malloc0 00:17:42.206 23:44:17 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:42.773 23:44:17 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tLoC807Z0B 00:17:42.773 [2024-07-15 23:44:17.841299] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:42.773 23:44:17 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.tLoC807Z0B 00:17:42.773 EAL: No free 2048 kB hugepages reported on node 1 00:17:55.033 Initializing NVMe Controllers 00:17:55.033 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:55.033 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:55.033 Initialization complete. Launching workers. 
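Stripped of xtrace noise, the whole positive path above is eight RPCs plus one perf run. A condensed sketch with paths shortened (rpc.py talks to the nvmf_tgt started inside cvl_0_0_ns_spdk; the PSK file is the one created above):

# Target: TCP transport, subsystem with one malloc namespace, a TLS
# listener (-k), and one host admitted with a PSK.
rpc.py sock_impl_set_options -i ssl --tls-version 13
rpc.py framework_start_init
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.tLoC807Z0B

# Initiator: spdk_nvme_perf over the ssl socket implementation, presenting
# the same key file via --psk-path.
ip netns exec cvl_0_0_ns_spdk spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
    --psk-path /tmp/tmp.tLoC807Z0B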
00:17:55.033 ======================================================== 00:17:55.033 Latency(us) 00:17:55.033 Device Information : IOPS MiB/s Average min max 00:17:55.033 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8736.57 34.13 7327.44 1146.40 8659.43 00:17:55.033 ======================================================== 00:17:55.033 Total : 8736.57 34.13 7327.44 1146.40 8659.43 00:17:55.033 00:17:55.033 23:44:27 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tLoC807Z0B 00:17:55.033 23:44:27 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:55.033 23:44:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:55.033 23:44:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:55.033 23:44:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.tLoC807Z0B' 00:17:55.033 23:44:27 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:55.033 23:44:27 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3803609 00:17:55.033 23:44:27 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:55.033 23:44:27 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:55.033 23:44:27 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3803609 /var/tmp/bdevperf.sock 00:17:55.033 23:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3803609 ']' 00:17:55.033 23:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:55.033 23:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:55.033 23:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:55.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:55.033 23:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:55.033 23:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:55.033 [2024-07-15 23:44:27.996995] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
00:17:55.034 [2024-07-15 23:44:27.997073] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3803609 ] 00:17:55.034 EAL: No free 2048 kB hugepages reported on node 1 00:17:55.034 [2024-07-15 23:44:28.053044] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.034 [2024-07-15 23:44:28.157778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:55.034 23:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:55.034 23:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:55.034 23:44:28 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tLoC807Z0B 00:17:55.034 [2024-07-15 23:44:28.525587] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:55.034 [2024-07-15 23:44:28.525687] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:55.034 TLSTESTn1 00:17:55.034 23:44:28 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:55.034 Running I/O for 10 seconds... 00:18:04.998 00:18:04.998 Latency(us) 00:18:04.998 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.998 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:04.998 Verification LBA range: start 0x0 length 0x2000 00:18:04.998 TLSTESTn1 : 10.02 3560.71 13.91 0.00 0.00 35885.75 8980.86 35729.26 00:18:04.998 =================================================================================================================== 00:18:04.998 Total : 3560.71 13.91 0.00 0.00 35885.75 8980.86 35729.26 00:18:04.998 0 00:18:04.998 23:44:38 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:04.998 23:44:38 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3803609 00:18:04.998 23:44:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3803609 ']' 00:18:04.998 23:44:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3803609 00:18:04.998 23:44:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:04.998 23:44:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:04.998 23:44:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3803609 00:18:04.998 23:44:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:04.998 23:44:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:04.998 23:44:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3803609' 00:18:04.998 killing process with pid 3803609 00:18:04.998 23:44:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3803609 00:18:04.998 Received shutdown signal, test time was about 10.000000 seconds 00:18:04.998 00:18:04.998 Latency(us) 00:18:04.998 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:18:04.998 =================================================================================================================== 00:18:04.998 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:04.998 [2024-07-15 23:44:38.797667] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:04.998 23:44:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3803609 00:18:04.998 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GCuKgIgJWW 00:18:04.998 23:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:04.998 23:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GCuKgIgJWW 00:18:04.998 23:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:04.998 23:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:04.998 23:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:04.998 23:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:04.998 23:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GCuKgIgJWW 00:18:04.998 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:04.998 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:04.998 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:04.998 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.GCuKgIgJWW' 00:18:04.998 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:04.998 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3804928 00:18:04.998 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:04.998 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:04.998 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3804928 /var/tmp/bdevperf.sock 00:18:04.998 23:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3804928 ']' 00:18:04.998 23:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:04.998 23:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:04.998 23:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:04.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:04.998 23:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:04.998 23:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:04.998 [2024-07-15 23:44:39.111524] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
00:18:04.998 [2024-07-15 23:44:39.111611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3804928 ] 00:18:04.998 EAL: No free 2048 kB hugepages reported on node 1 00:18:04.998 [2024-07-15 23:44:39.168406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.998 [2024-07-15 23:44:39.270597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:04.998 23:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:04.998 23:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:04.998 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GCuKgIgJWW 00:18:04.998 [2024-07-15 23:44:39.648396] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:04.998 [2024-07-15 23:44:39.648514] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:04.998 [2024-07-15 23:44:39.656439] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:04.998 [2024-07-15 23:44:39.656477] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f69f90 (107): Transport endpoint is not connected 00:18:04.998 [2024-07-15 23:44:39.657429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f69f90 (9): Bad file descriptor 00:18:04.998 [2024-07-15 23:44:39.658428] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:04.998 [2024-07-15 23:44:39.658447] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:04.998 [2024-07-15 23:44:39.658463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
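This is the first of four negative cases (a key the target never saw, then a valid key under the wrong hostnqn, the wrong subnqn, and finally no key at all). Each fails the same way: the TLS handshake never completes, the target closes the socket, the initiator logs errno 107 (ENOTCONN) and the attach RPC returns -5, which the test asserts by wrapping run_bdevperf in the autotest NOT helper. The attach that run_bdevperf issues internally looks roughly like this for the present case:

# Must fail: /tmp/tmp.GCuKgIgJWW was never registered on the target.
# NOT(...) succeeds only when the wrapped command exits non-zero.
NOT rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.GCuKgIgJWW
# Expected JSON-RPC response: {"code": -5, "message": "Input/output error"}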
00:18:04.998 request: 00:18:04.998 { 00:18:04.998 "name": "TLSTEST", 00:18:04.999 "trtype": "tcp", 00:18:04.999 "traddr": "10.0.0.2", 00:18:04.999 "adrfam": "ipv4", 00:18:04.999 "trsvcid": "4420", 00:18:04.999 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:04.999 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:04.999 "prchk_reftag": false, 00:18:04.999 "prchk_guard": false, 00:18:04.999 "hdgst": false, 00:18:04.999 "ddgst": false, 00:18:04.999 "psk": "/tmp/tmp.GCuKgIgJWW", 00:18:04.999 "method": "bdev_nvme_attach_controller", 00:18:04.999 "req_id": 1 00:18:04.999 } 00:18:04.999 Got JSON-RPC error response 00:18:04.999 response: 00:18:04.999 { 00:18:04.999 "code": -5, 00:18:04.999 "message": "Input/output error" 00:18:04.999 } 00:18:04.999 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3804928 00:18:04.999 23:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3804928 ']' 00:18:04.999 23:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3804928 00:18:04.999 23:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:04.999 23:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:04.999 23:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3804928 00:18:04.999 23:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:04.999 23:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:04.999 23:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3804928' 00:18:04.999 killing process with pid 3804928 00:18:04.999 23:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3804928 00:18:04.999 Received shutdown signal, test time was about 10.000000 seconds 00:18:04.999 00:18:04.999 Latency(us) 00:18:04.999 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.999 =================================================================================================================== 00:18:04.999 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:04.999 [2024-07-15 23:44:39.706558] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:04.999 23:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3804928 00:18:04.999 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:04.999 23:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:04.999 23:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:04.999 23:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:04.999 23:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:04.999 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.tLoC807Z0B 00:18:04.999 23:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:04.999 23:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.tLoC807Z0B 00:18:04.999 23:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:04.999 23:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:04.999 23:44:39 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:04.999 23:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:04.999 23:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.tLoC807Z0B 00:18:04.999 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:04.999 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:04.999 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:04.999 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.tLoC807Z0B' 00:18:04.999 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:04.999 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3805063 00:18:04.999 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:04.999 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:04.999 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3805063 /var/tmp/bdevperf.sock 00:18:04.999 23:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3805063 ']' 00:18:04.999 23:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:04.999 23:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:04.999 23:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:04.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:04.999 23:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:04.999 23:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:04.999 [2024-07-15 23:44:40.004598] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
00:18:04.999 [2024-07-15 23:44:40.004723] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3805063 ] 00:18:04.999 EAL: No free 2048 kB hugepages reported on node 1 00:18:04.999 [2024-07-15 23:44:40.066570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.257 [2024-07-15 23:44:40.175907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:05.257 23:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:05.257 23:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:05.257 23:44:40 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.tLoC807Z0B 00:18:05.516 [2024-07-15 23:44:40.564149] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:05.516 [2024-07-15 23:44:40.564282] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:05.516 [2024-07-15 23:44:40.569626] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:05.516 [2024-07-15 23:44:40.569659] posix.c: 528:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:05.516 [2024-07-15 23:44:40.569699] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:05.516 [2024-07-15 23:44:40.570182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1166f90 (107): Transport endpoint is not connected 00:18:05.516 [2024-07-15 23:44:40.571168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1166f90 (9): Bad file descriptor 00:18:05.516 [2024-07-15 23:44:40.572167] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:05.516 [2024-07-15 23:44:40.572189] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:05.516 [2024-07-15 23:44:40.572207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
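The "Could not find PSK for identity" lines expose the lookup key: the target indexes PSKs by the TLS PSK identity, built here as the string "NVMe0R01 <hostnqn> <subnqn>" (format as observed in the errors above). A valid key offered as host2 therefore misses, since only host1 was added with nvmf_subsystem_add_host:

# Identity the target searched for and failed to find:
printf 'NVMe0R01 %s %s\n' nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
# The only identity actually registered for this subsystem:
printf 'NVMe0R01 %s %s\n' nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode1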
00:18:05.516 request: 00:18:05.516 { 00:18:05.516 "name": "TLSTEST", 00:18:05.516 "trtype": "tcp", 00:18:05.516 "traddr": "10.0.0.2", 00:18:05.516 "adrfam": "ipv4", 00:18:05.516 "trsvcid": "4420", 00:18:05.516 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:05.516 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:05.516 "prchk_reftag": false, 00:18:05.516 "prchk_guard": false, 00:18:05.516 "hdgst": false, 00:18:05.516 "ddgst": false, 00:18:05.516 "psk": "/tmp/tmp.tLoC807Z0B", 00:18:05.516 "method": "bdev_nvme_attach_controller", 00:18:05.516 "req_id": 1 00:18:05.516 } 00:18:05.516 Got JSON-RPC error response 00:18:05.516 response: 00:18:05.516 { 00:18:05.516 "code": -5, 00:18:05.516 "message": "Input/output error" 00:18:05.516 } 00:18:05.516 23:44:40 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3805063 00:18:05.516 23:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3805063 ']' 00:18:05.516 23:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3805063 00:18:05.516 23:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:05.516 23:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:05.516 23:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3805063 00:18:05.516 23:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:05.516 23:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:05.516 23:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3805063' 00:18:05.516 killing process with pid 3805063 00:18:05.516 23:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3805063 00:18:05.516 Received shutdown signal, test time was about 10.000000 seconds 00:18:05.516 00:18:05.516 Latency(us) 00:18:05.516 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.516 =================================================================================================================== 00:18:05.516 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:05.516 [2024-07-15 23:44:40.625015] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:05.516 23:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3805063 00:18:05.774 23:44:40 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:05.774 23:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:05.774 23:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:05.774 23:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:05.774 23:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:05.774 23:44:40 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.tLoC807Z0B 00:18:05.774 23:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:05.774 23:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.tLoC807Z0B 00:18:05.774 23:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:05.774 23:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:05.774 23:44:40 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:05.774 23:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:05.775 23:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.tLoC807Z0B 00:18:05.775 23:44:40 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:05.775 23:44:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:05.775 23:44:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:05.775 23:44:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.tLoC807Z0B' 00:18:05.775 23:44:40 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:05.775 23:44:40 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3805154 00:18:05.775 23:44:40 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:05.775 23:44:40 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:05.775 23:44:40 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3805154 /var/tmp/bdevperf.sock 00:18:05.775 23:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3805154 ']' 00:18:05.775 23:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:05.775 23:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:05.775 23:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:05.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:05.775 23:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:05.775 23:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:06.033 [2024-07-15 23:44:40.926559] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
00:18:06.033 [2024-07-15 23:44:40.926638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3805154 ] 00:18:06.033 EAL: No free 2048 kB hugepages reported on node 1 00:18:06.033 [2024-07-15 23:44:40.985368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.033 [2024-07-15 23:44:41.090806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:06.291 23:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:06.291 23:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:06.291 23:44:41 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tLoC807Z0B 00:18:06.550 [2024-07-15 23:44:41.474203] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:06.550 [2024-07-15 23:44:41.474317] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:06.550 [2024-07-15 23:44:41.481044] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:06.551 [2024-07-15 23:44:41.481076] posix.c: 528:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:06.551 [2024-07-15 23:44:41.481127] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:06.551 [2024-07-15 23:44:41.481292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce1f90 (107): Transport endpoint is not connected 00:18:06.551 [2024-07-15 23:44:41.482282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce1f90 (9): Bad file descriptor 00:18:06.551 [2024-07-15 23:44:41.483282] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:18:06.551 [2024-07-15 23:44:41.483318] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:06.551 [2024-07-15 23:44:41.483334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:18:06.551 request: 00:18:06.551 { 00:18:06.551 "name": "TLSTEST", 00:18:06.551 "trtype": "tcp", 00:18:06.551 "traddr": "10.0.0.2", 00:18:06.551 "adrfam": "ipv4", 00:18:06.551 "trsvcid": "4420", 00:18:06.551 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:06.551 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:06.551 "prchk_reftag": false, 00:18:06.551 "prchk_guard": false, 00:18:06.551 "hdgst": false, 00:18:06.551 "ddgst": false, 00:18:06.551 "psk": "/tmp/tmp.tLoC807Z0B", 00:18:06.551 "method": "bdev_nvme_attach_controller", 00:18:06.551 "req_id": 1 00:18:06.551 } 00:18:06.551 Got JSON-RPC error response 00:18:06.551 response: 00:18:06.551 { 00:18:06.551 "code": -5, 00:18:06.551 "message": "Input/output error" 00:18:06.551 } 00:18:06.551 23:44:41 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3805154 00:18:06.551 23:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3805154 ']' 00:18:06.551 23:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3805154 00:18:06.551 23:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:06.551 23:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:06.551 23:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3805154 00:18:06.551 23:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:06.551 23:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:06.551 23:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3805154' 00:18:06.551 killing process with pid 3805154 00:18:06.551 23:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3805154 00:18:06.551 Received shutdown signal, test time was about 10.000000 seconds 00:18:06.551 00:18:06.551 Latency(us) 00:18:06.551 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.551 =================================================================================================================== 00:18:06.551 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:06.551 [2024-07-15 23:44:41.533355] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:06.551 23:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3805154 00:18:06.809 23:44:41 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:06.809 23:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:06.809 23:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:06.809 23:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:06.809 23:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:06.809 23:44:41 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:06.809 23:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:06.809 23:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:06.809 23:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:06.809 23:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:06.809 23:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:18:06.809 23:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:06.809 23:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:06.809 23:44:41 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:06.809 23:44:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:06.809 23:44:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:06.809 23:44:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:06.809 23:44:41 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:06.809 23:44:41 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3805225 00:18:06.809 23:44:41 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:06.809 23:44:41 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:06.809 23:44:41 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3805225 /var/tmp/bdevperf.sock 00:18:06.809 23:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3805225 ']' 00:18:06.809 23:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:06.809 23:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:06.809 23:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:06.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:06.809 23:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:06.809 23:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:06.809 [2024-07-15 23:44:41.836715] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
00:18:06.809 [2024-07-15 23:44:41.836801] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3805225 ] 00:18:06.810 EAL: No free 2048 kB hugepages reported on node 1 00:18:06.810 [2024-07-15 23:44:41.895041] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.068 [2024-07-15 23:44:42.005111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:07.068 23:44:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:07.068 23:44:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:07.068 23:44:42 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:07.325 [2024-07-15 23:44:42.353419] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:07.325 [2024-07-15 23:44:42.354835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1830770 (9): Bad file descriptor 00:18:07.325 [2024-07-15 23:44:42.355830] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:07.325 [2024-07-15 23:44:42.355855] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:07.325 [2024-07-15 23:44:42.355872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:07.325 request: 00:18:07.325 { 00:18:07.325 "name": "TLSTEST", 00:18:07.325 "trtype": "tcp", 00:18:07.325 "traddr": "10.0.0.2", 00:18:07.325 "adrfam": "ipv4", 00:18:07.325 "trsvcid": "4420", 00:18:07.325 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:07.325 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:07.326 "prchk_reftag": false, 00:18:07.326 "prchk_guard": false, 00:18:07.326 "hdgst": false, 00:18:07.326 "ddgst": false, 00:18:07.326 "method": "bdev_nvme_attach_controller", 00:18:07.326 "req_id": 1 00:18:07.326 } 00:18:07.326 Got JSON-RPC error response 00:18:07.326 response: 00:18:07.326 { 00:18:07.326 "code": -5, 00:18:07.326 "message": "Input/output error" 00:18:07.326 } 00:18:07.326 23:44:42 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3805225 00:18:07.326 23:44:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3805225 ']' 00:18:07.326 23:44:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3805225 00:18:07.326 23:44:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:07.326 23:44:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:07.326 23:44:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3805225 00:18:07.326 23:44:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:07.326 23:44:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:07.326 23:44:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3805225' 00:18:07.326 killing process with pid 3805225 00:18:07.326 23:44:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3805225 00:18:07.326 Received shutdown signal, test time was about 10.000000 seconds 00:18:07.326 00:18:07.326 Latency(us) 00:18:07.326 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:07.326 =================================================================================================================== 00:18:07.326 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:07.326 23:44:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3805225 00:18:07.583 23:44:42 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:07.583 23:44:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:07.583 23:44:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:07.583 23:44:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:07.583 23:44:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:07.583 23:44:42 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 3801836 00:18:07.583 23:44:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3801836 ']' 00:18:07.583 23:44:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3801836 00:18:07.583 23:44:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:07.583 23:44:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:07.583 23:44:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3801836 00:18:07.583 23:44:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:07.583 23:44:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:07.583 23:44:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3801836' 00:18:07.583 
killing process with pid 3801836 00:18:07.583 23:44:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3801836 00:18:07.583 [2024-07-15 23:44:42.690412] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:07.583 23:44:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3801836 00:18:07.841 23:44:42 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:07.841 23:44:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:07.841 23:44:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:07.841 23:44:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:07.841 23:44:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:07.841 23:44:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:18:07.841 23:44:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:08.100 23:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:08.100 23:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:18:08.100 23:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.JMaqrwsgzY 00:18:08.100 23:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:08.100 23:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.JMaqrwsgzY 00:18:08.100 23:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:18:08.100 23:44:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:08.100 23:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:08.100 23:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:08.100 23:44:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3805371 00:18:08.100 23:44:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:08.100 23:44:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3805371 00:18:08.100 23:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3805371 ']' 00:18:08.100 23:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.100 23:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:08.100 23:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:08.100 23:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:08.100 23:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:08.100 [2024-07-15 23:44:43.073950] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
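Same construction as the format_interchange_psk sketch earlier, now with a 48-character key and digest 2, which selects the :02: (HMAC-SHA-384) framing; 48 key bytes plus the 4-byte CRC no longer divide evenly by three, hence the '==' padding visible in the result:

# Reusing the earlier sketch:
format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2
# NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: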
00:18:08.100 [2024-07-15 23:44:43.074057] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:08.100 EAL: No free 2048 kB hugepages reported on node 1 00:18:08.100 [2024-07-15 23:44:43.139627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.356 [2024-07-15 23:44:43.247720] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:08.356 [2024-07-15 23:44:43.247788] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:08.356 [2024-07-15 23:44:43.247811] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:08.356 [2024-07-15 23:44:43.247822] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:08.356 [2024-07-15 23:44:43.247831] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:08.356 [2024-07-15 23:44:43.247857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:08.356 23:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:08.356 23:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:08.356 23:44:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:08.356 23:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:08.356 23:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:08.356 23:44:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:08.356 23:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.JMaqrwsgzY 00:18:08.356 23:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.JMaqrwsgzY 00:18:08.356 23:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:08.613 [2024-07-15 23:44:43.657586] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:08.613 23:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:08.871 23:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:09.128 [2024-07-15 23:44:44.239128] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:09.128 [2024-07-15 23:44:44.239364] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:09.387 23:44:44 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:09.387 malloc0 00:18:09.644 23:44:44 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:09.901 23:44:44 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.JMaqrwsgzY 00:18:10.159 [2024-07-15 23:44:45.080536] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:10.159 23:44:45 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JMaqrwsgzY 00:18:10.159 23:44:45 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:10.159 23:44:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:10.159 23:44:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:10.159 23:44:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.JMaqrwsgzY' 00:18:10.159 23:44:45 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:10.159 23:44:45 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3805659 00:18:10.159 23:44:45 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:10.159 23:44:45 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:10.159 23:44:45 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3805659 /var/tmp/bdevperf.sock 00:18:10.159 23:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3805659 ']' 00:18:10.159 23:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:10.159 23:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:10.159 23:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:10.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:10.159 23:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:10.159 23:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.159 [2024-07-15 23:44:45.146315] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
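
The setup_nvmf_tgt steps traced above (tls.sh@49-58) are the whole server side of the TLS test: one TCP transport, one subsystem, a TLS-enabled listener, a malloc namespace, and a host entry pointing at the PSK file. Condensed from the trace, with $rpc standing in for scripts/rpc.py and the mktemp'd key path as in this run:

rpc=./scripts/rpc.py
psk=/tmp/tmp.JMaqrwsgzY               # interchange key file, chmod 0600
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
# -k marks the listener as TLS (saved later as "secure_channel": true)
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$psk"
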
00:18:10.159 [2024-07-15 23:44:45.146402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3805659 ] 00:18:10.159 EAL: No free 2048 kB hugepages reported on node 1 00:18:10.159 [2024-07-15 23:44:45.204405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.417 [2024-07-15 23:44:45.312591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:10.417 23:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:10.417 23:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:10.417 23:44:45 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JMaqrwsgzY 00:18:10.674 [2024-07-15 23:44:45.651603] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:10.674 [2024-07-15 23:44:45.651724] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:10.674 TLSTESTn1 00:18:10.674 23:44:45 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:10.932 Running I/O for 10 seconds... 00:18:20.893 00:18:20.893 Latency(us) 00:18:20.893 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.893 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:20.893 Verification LBA range: start 0x0 length 0x2000 00:18:20.893 TLSTESTn1 : 10.04 3350.13 13.09 0.00 0.00 38122.53 5898.24 42913.94 00:18:20.893 =================================================================================================================== 00:18:20.893 Total : 3350.13 13.09 0.00 0.00 38122.53 5898.24 42913.94 00:18:20.893 0 00:18:20.893 23:44:55 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:20.893 23:44:55 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3805659 00:18:20.893 23:44:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3805659 ']' 00:18:20.893 23:44:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3805659 00:18:20.893 23:44:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:20.893 23:44:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:20.893 23:44:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3805659 00:18:20.893 23:44:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:20.893 23:44:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:20.893 23:44:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3805659' 00:18:20.893 killing process with pid 3805659 00:18:20.893 23:44:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3805659 00:18:20.893 Received shutdown signal, test time was about 10.000000 seconds 00:18:20.893 00:18:20.893 Latency(us) 00:18:20.893 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:18:20.893 =================================================================================================================== 00:18:20.893 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:20.893 [2024-07-15 23:44:55.942925] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:20.893 23:44:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3805659 00:18:21.151 23:44:56 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.JMaqrwsgzY 00:18:21.151 23:44:56 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JMaqrwsgzY 00:18:21.151 23:44:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:21.151 23:44:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JMaqrwsgzY 00:18:21.151 23:44:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:21.151 23:44:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:21.151 23:44:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:21.151 23:44:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:21.151 23:44:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JMaqrwsgzY 00:18:21.151 23:44:56 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:21.151 23:44:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:21.151 23:44:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:21.151 23:44:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.JMaqrwsgzY' 00:18:21.151 23:44:56 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:21.151 23:44:56 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3806988 00:18:21.151 23:44:56 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:21.151 23:44:56 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:21.151 23:44:56 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3806988 /var/tmp/bdevperf.sock 00:18:21.151 23:44:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3806988 ']' 00:18:21.151 23:44:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:21.151 23:44:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:21.151 23:44:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:21.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:21.151 23:44:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:21.151 23:44:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:21.151 [2024-07-15 23:44:56.256118] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
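
The client side of the successful run above follows the run_bdevperf pattern: bdevperf is started with -z so it idles waiting for RPCs, the TLS controller is attached over bdevperf's own RPC socket with --psk, and the verify workload is kicked off via bdevperf.py. A condensed sketch with the paths from the trace; the harness polls the socket with waitforlisten before the attach, for which a sleep stands in here:

./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
sleep 1   # stand-in for waitforlisten on /var/tmp/bdevperf.sock
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JMaqrwsgzY
./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
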
00:18:21.151 [2024-07-15 23:44:56.256204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3806988 ] 00:18:21.410 EAL: No free 2048 kB hugepages reported on node 1 00:18:21.410 [2024-07-15 23:44:56.314626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.410 [2024-07-15 23:44:56.417056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:21.410 23:44:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:21.410 23:44:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:21.410 23:44:56 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JMaqrwsgzY 00:18:22.003 [2024-07-15 23:44:56.799603] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:22.003 [2024-07-15 23:44:56.799676] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:22.003 [2024-07-15 23:44:56.799691] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.JMaqrwsgzY 00:18:22.003 request: 00:18:22.003 { 00:18:22.003 "name": "TLSTEST", 00:18:22.003 "trtype": "tcp", 00:18:22.003 "traddr": "10.0.0.2", 00:18:22.003 "adrfam": "ipv4", 00:18:22.003 "trsvcid": "4420", 00:18:22.003 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:22.003 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:22.003 "prchk_reftag": false, 00:18:22.003 "prchk_guard": false, 00:18:22.003 "hdgst": false, 00:18:22.003 "ddgst": false, 00:18:22.003 "psk": "/tmp/tmp.JMaqrwsgzY", 00:18:22.003 "method": "bdev_nvme_attach_controller", 00:18:22.003 "req_id": 1 00:18:22.003 } 00:18:22.003 Got JSON-RPC error response 00:18:22.003 response: 00:18:22.003 { 00:18:22.003 "code": -1, 00:18:22.003 "message": "Operation not permitted" 00:18:22.003 } 00:18:22.003 23:44:56 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3806988 00:18:22.003 23:44:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3806988 ']' 00:18:22.003 23:44:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3806988 00:18:22.003 23:44:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:22.003 23:44:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:22.003 23:44:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3806988 00:18:22.003 23:44:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:22.003 23:44:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:22.003 23:44:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3806988' 00:18:22.003 killing process with pid 3806988 00:18:22.003 23:44:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3806988 00:18:22.003 Received shutdown signal, test time was about 10.000000 seconds 00:18:22.003 00:18:22.003 Latency(us) 00:18:22.003 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.003 
=================================================================================================================== 00:18:22.003 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:22.003 23:44:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3806988 00:18:22.003 23:44:57 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:22.003 23:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:22.003 23:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:22.003 23:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:22.003 23:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:22.003 23:44:57 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 3805371 00:18:22.003 23:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3805371 ']' 00:18:22.003 23:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3805371 00:18:22.003 23:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:22.003 23:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:22.003 23:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3805371 00:18:22.306 23:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:22.306 23:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:22.306 23:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3805371' 00:18:22.306 killing process with pid 3805371 00:18:22.306 23:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3805371 00:18:22.306 [2024-07-15 23:44:57.105358] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:22.306 23:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3805371 00:18:22.306 23:44:57 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:18:22.306 23:44:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:22.306 23:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:22.306 23:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.306 23:44:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3807134 00:18:22.306 23:44:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:22.306 23:44:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3807134 00:18:22.306 23:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3807134 ']' 00:18:22.306 23:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.306 23:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:22.306 23:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
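
The bdev_nvme_attach_controller failure above ("Incorrect permissions for PSK file" leading to -1 Operation not permitted) is the expected outcome of the chmod 0666 at tls.sh@170: SPDK refuses to load a PSK file readable by group or other. A sketch of an equivalent pre-flight check, assuming the rule is "no mode bits beyond owner", which matches the 0600-pass / 0666-fail behavior in this run:

mode=$(stat -c %a /tmp/tmp.JMaqrwsgzY)   # e.g. 600 or 666
if (( 8#$mode & 8#077 )); then
    echo "Incorrect permissions for PSK file (mode $mode)" >&2
    exit 1
fi
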
00:18:22.306 23:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:22.306 23:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.597 [2024-07-15 23:44:57.408756] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:18:22.597 [2024-07-15 23:44:57.408848] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:22.597 EAL: No free 2048 kB hugepages reported on node 1 00:18:22.597 [2024-07-15 23:44:57.474208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.597 [2024-07-15 23:44:57.583897] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:22.597 [2024-07-15 23:44:57.583984] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:22.597 [2024-07-15 23:44:57.584013] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:22.597 [2024-07-15 23:44:57.584026] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:22.597 [2024-07-15 23:44:57.584036] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:22.597 [2024-07-15 23:44:57.584067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:22.597 23:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:22.597 23:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:22.597 23:44:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:22.597 23:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:22.598 23:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.598 23:44:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:22.598 23:44:57 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.JMaqrwsgzY 00:18:22.598 23:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:22.598 23:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.JMaqrwsgzY 00:18:22.598 23:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:18:22.855 23:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:22.855 23:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:18:22.855 23:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:22.855 23:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.JMaqrwsgzY 00:18:22.855 23:44:57 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.JMaqrwsgzY 00:18:22.855 23:44:57 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:23.113 [2024-07-15 23:44:57.998983] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:23.113 23:44:58 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:23.370 
23:44:58 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:23.627 [2024-07-15 23:44:58.512340] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:23.627 [2024-07-15 23:44:58.512553] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:23.627 23:44:58 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:23.885 malloc0 00:18:23.885 23:44:58 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:24.143 23:44:59 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JMaqrwsgzY 00:18:24.143 [2024-07-15 23:44:59.236359] tcp.c:3603:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:24.143 [2024-07-15 23:44:59.236397] tcp.c:3689:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:24.143 [2024-07-15 23:44:59.236436] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:24.143 request: 00:18:24.143 { 00:18:24.143 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:24.143 "host": "nqn.2016-06.io.spdk:host1", 00:18:24.143 "psk": "/tmp/tmp.JMaqrwsgzY", 00:18:24.143 "method": "nvmf_subsystem_add_host", 00:18:24.143 "req_id": 1 00:18:24.143 } 00:18:24.143 Got JSON-RPC error response 00:18:24.143 response: 00:18:24.143 { 00:18:24.143 "code": -32603, 00:18:24.143 "message": "Internal error" 00:18:24.143 } 00:18:24.143 23:44:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:24.143 23:44:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:24.143 23:44:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:24.143 23:44:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:24.143 23:44:59 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 3807134 00:18:24.143 23:44:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3807134 ']' 00:18:24.143 23:44:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3807134 00:18:24.143 23:44:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:24.143 23:44:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:24.143 23:44:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3807134 00:18:24.401 23:44:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:24.401 23:44:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:24.401 23:44:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3807134' 00:18:24.401 killing process with pid 3807134 00:18:24.401 23:44:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3807134 00:18:24.401 23:44:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3807134 00:18:24.660 23:44:59 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.JMaqrwsgzY 00:18:24.660 23:44:59 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:24.660 
23:44:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:24.660 23:44:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:24.660 23:44:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.660 23:44:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3807430 00:18:24.660 23:44:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:24.660 23:44:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3807430 00:18:24.660 23:44:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3807430 ']' 00:18:24.660 23:44:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.660 23:44:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:24.660 23:44:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:24.660 23:44:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:24.660 23:44:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.660 [2024-07-15 23:44:59.606354] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:18:24.660 [2024-07-15 23:44:59.606434] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:24.660 EAL: No free 2048 kB hugepages reported on node 1 00:18:24.660 [2024-07-15 23:44:59.670230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.660 [2024-07-15 23:44:59.779037] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:24.660 [2024-07-15 23:44:59.779104] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:24.660 [2024-07-15 23:44:59.779117] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:24.660 [2024-07-15 23:44:59.779135] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:24.660 [2024-07-15 23:44:59.779145] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
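
tls.sh@177 above ran the same setup through NOT, the autotest_common.sh helper that inverts a command's exit status, so the -32603 "Internal error" from nvmf_subsystem_add_host (the PSK file was still 0666 at that point) counts as a pass. A reduced sketch of the idiom, leaving out the valid_exec_arg argument checking visible in the trace:

NOT() {
    local es=0
    "$@" || es=$?
    # mirror the '(( !es == 0 ))' test from the trace:
    # succeed only when the wrapped command failed
    (( es != 0 ))
}
# usage, as at tls.sh@177:
NOT setup_nvmf_tgt /tmp/tmp.JMaqrwsgzY && echo 'negative test passed'
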
00:18:24.660 [2024-07-15 23:44:59.779173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:24.918 23:44:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:24.918 23:44:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:24.918 23:44:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:24.918 23:44:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:24.918 23:44:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.918 23:44:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:24.918 23:44:59 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.JMaqrwsgzY 00:18:24.918 23:44:59 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.JMaqrwsgzY 00:18:24.918 23:44:59 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:25.176 [2024-07-15 23:45:00.157134] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:25.176 23:45:00 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:25.434 23:45:00 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:25.692 [2024-07-15 23:45:00.674487] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:25.692 [2024-07-15 23:45:00.674739] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:25.692 23:45:00 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:25.950 malloc0 00:18:25.950 23:45:00 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:26.208 23:45:01 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JMaqrwsgzY 00:18:26.466 [2024-07-15 23:45:01.451762] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:26.466 23:45:01 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=3807797 00:18:26.466 23:45:01 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:26.466 23:45:01 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:26.466 23:45:01 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 3807797 /var/tmp/bdevperf.sock 00:18:26.466 23:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3807797 ']' 00:18:26.466 23:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:26.466 23:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:26.466 23:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:26.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:26.466 23:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:26.466 23:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:26.466 [2024-07-15 23:45:01.517367] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:18:26.466 [2024-07-15 23:45:01.517457] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3807797 ] 00:18:26.466 EAL: No free 2048 kB hugepages reported on node 1 00:18:26.466 [2024-07-15 23:45:01.576443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.724 [2024-07-15 23:45:01.692127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:26.724 23:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:26.724 23:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:26.724 23:45:01 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JMaqrwsgzY 00:18:26.981 [2024-07-15 23:45:02.021440] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:26.981 [2024-07-15 23:45:02.021549] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:26.981 TLSTESTn1 00:18:27.239 23:45:02 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:27.497 23:45:02 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:18:27.497 "subsystems": [ 00:18:27.497 { 00:18:27.497 "subsystem": "keyring", 00:18:27.497 "config": [] 00:18:27.497 }, 00:18:27.497 { 00:18:27.497 "subsystem": "iobuf", 00:18:27.497 "config": [ 00:18:27.497 { 00:18:27.497 "method": "iobuf_set_options", 00:18:27.497 "params": { 00:18:27.497 "small_pool_count": 8192, 00:18:27.497 "large_pool_count": 1024, 00:18:27.497 "small_bufsize": 8192, 00:18:27.497 "large_bufsize": 135168 00:18:27.497 } 00:18:27.497 } 00:18:27.497 ] 00:18:27.497 }, 00:18:27.497 { 00:18:27.497 "subsystem": "sock", 00:18:27.497 "config": [ 00:18:27.497 { 00:18:27.497 "method": "sock_set_default_impl", 00:18:27.497 "params": { 00:18:27.497 "impl_name": "posix" 00:18:27.497 } 00:18:27.497 }, 00:18:27.497 { 00:18:27.497 "method": "sock_impl_set_options", 00:18:27.497 "params": { 00:18:27.497 "impl_name": "ssl", 00:18:27.497 "recv_buf_size": 4096, 00:18:27.497 "send_buf_size": 4096, 00:18:27.497 "enable_recv_pipe": true, 00:18:27.497 "enable_quickack": false, 00:18:27.497 "enable_placement_id": 0, 00:18:27.497 "enable_zerocopy_send_server": true, 00:18:27.497 "enable_zerocopy_send_client": false, 00:18:27.497 "zerocopy_threshold": 0, 00:18:27.497 "tls_version": 0, 00:18:27.497 "enable_ktls": false 00:18:27.497 } 00:18:27.497 }, 00:18:27.497 { 00:18:27.497 "method": "sock_impl_set_options", 00:18:27.497 "params": { 00:18:27.497 "impl_name": "posix", 00:18:27.497 "recv_buf_size": 2097152, 00:18:27.497 
"send_buf_size": 2097152, 00:18:27.497 "enable_recv_pipe": true, 00:18:27.497 "enable_quickack": false, 00:18:27.497 "enable_placement_id": 0, 00:18:27.497 "enable_zerocopy_send_server": true, 00:18:27.497 "enable_zerocopy_send_client": false, 00:18:27.497 "zerocopy_threshold": 0, 00:18:27.497 "tls_version": 0, 00:18:27.497 "enable_ktls": false 00:18:27.497 } 00:18:27.497 } 00:18:27.497 ] 00:18:27.497 }, 00:18:27.497 { 00:18:27.497 "subsystem": "vmd", 00:18:27.497 "config": [] 00:18:27.497 }, 00:18:27.497 { 00:18:27.497 "subsystem": "accel", 00:18:27.497 "config": [ 00:18:27.497 { 00:18:27.497 "method": "accel_set_options", 00:18:27.497 "params": { 00:18:27.497 "small_cache_size": 128, 00:18:27.497 "large_cache_size": 16, 00:18:27.497 "task_count": 2048, 00:18:27.497 "sequence_count": 2048, 00:18:27.497 "buf_count": 2048 00:18:27.497 } 00:18:27.497 } 00:18:27.497 ] 00:18:27.497 }, 00:18:27.497 { 00:18:27.497 "subsystem": "bdev", 00:18:27.497 "config": [ 00:18:27.497 { 00:18:27.497 "method": "bdev_set_options", 00:18:27.497 "params": { 00:18:27.497 "bdev_io_pool_size": 65535, 00:18:27.497 "bdev_io_cache_size": 256, 00:18:27.497 "bdev_auto_examine": true, 00:18:27.497 "iobuf_small_cache_size": 128, 00:18:27.497 "iobuf_large_cache_size": 16 00:18:27.497 } 00:18:27.497 }, 00:18:27.497 { 00:18:27.497 "method": "bdev_raid_set_options", 00:18:27.497 "params": { 00:18:27.497 "process_window_size_kb": 1024 00:18:27.497 } 00:18:27.497 }, 00:18:27.497 { 00:18:27.497 "method": "bdev_iscsi_set_options", 00:18:27.497 "params": { 00:18:27.497 "timeout_sec": 30 00:18:27.497 } 00:18:27.497 }, 00:18:27.497 { 00:18:27.497 "method": "bdev_nvme_set_options", 00:18:27.497 "params": { 00:18:27.497 "action_on_timeout": "none", 00:18:27.497 "timeout_us": 0, 00:18:27.497 "timeout_admin_us": 0, 00:18:27.497 "keep_alive_timeout_ms": 10000, 00:18:27.497 "arbitration_burst": 0, 00:18:27.497 "low_priority_weight": 0, 00:18:27.497 "medium_priority_weight": 0, 00:18:27.497 "high_priority_weight": 0, 00:18:27.497 "nvme_adminq_poll_period_us": 10000, 00:18:27.497 "nvme_ioq_poll_period_us": 0, 00:18:27.497 "io_queue_requests": 0, 00:18:27.497 "delay_cmd_submit": true, 00:18:27.497 "transport_retry_count": 4, 00:18:27.497 "bdev_retry_count": 3, 00:18:27.497 "transport_ack_timeout": 0, 00:18:27.497 "ctrlr_loss_timeout_sec": 0, 00:18:27.497 "reconnect_delay_sec": 0, 00:18:27.498 "fast_io_fail_timeout_sec": 0, 00:18:27.498 "disable_auto_failback": false, 00:18:27.498 "generate_uuids": false, 00:18:27.498 "transport_tos": 0, 00:18:27.498 "nvme_error_stat": false, 00:18:27.498 "rdma_srq_size": 0, 00:18:27.498 "io_path_stat": false, 00:18:27.498 "allow_accel_sequence": false, 00:18:27.498 "rdma_max_cq_size": 0, 00:18:27.498 "rdma_cm_event_timeout_ms": 0, 00:18:27.498 "dhchap_digests": [ 00:18:27.498 "sha256", 00:18:27.498 "sha384", 00:18:27.498 "sha512" 00:18:27.498 ], 00:18:27.498 "dhchap_dhgroups": [ 00:18:27.498 "null", 00:18:27.498 "ffdhe2048", 00:18:27.498 "ffdhe3072", 00:18:27.498 "ffdhe4096", 00:18:27.498 "ffdhe6144", 00:18:27.498 "ffdhe8192" 00:18:27.498 ] 00:18:27.498 } 00:18:27.498 }, 00:18:27.498 { 00:18:27.498 "method": "bdev_nvme_set_hotplug", 00:18:27.498 "params": { 00:18:27.498 "period_us": 100000, 00:18:27.498 "enable": false 00:18:27.498 } 00:18:27.498 }, 00:18:27.498 { 00:18:27.498 "method": "bdev_malloc_create", 00:18:27.498 "params": { 00:18:27.498 "name": "malloc0", 00:18:27.498 "num_blocks": 8192, 00:18:27.498 "block_size": 4096, 00:18:27.498 "physical_block_size": 4096, 00:18:27.498 "uuid": 
"b678a2e9-933e-48bf-b436-3007ac00090f", 00:18:27.498 "optimal_io_boundary": 0 00:18:27.498 } 00:18:27.498 }, 00:18:27.498 { 00:18:27.498 "method": "bdev_wait_for_examine" 00:18:27.498 } 00:18:27.498 ] 00:18:27.498 }, 00:18:27.498 { 00:18:27.498 "subsystem": "nbd", 00:18:27.498 "config": [] 00:18:27.498 }, 00:18:27.498 { 00:18:27.498 "subsystem": "scheduler", 00:18:27.498 "config": [ 00:18:27.498 { 00:18:27.498 "method": "framework_set_scheduler", 00:18:27.498 "params": { 00:18:27.498 "name": "static" 00:18:27.498 } 00:18:27.498 } 00:18:27.498 ] 00:18:27.498 }, 00:18:27.498 { 00:18:27.498 "subsystem": "nvmf", 00:18:27.498 "config": [ 00:18:27.498 { 00:18:27.498 "method": "nvmf_set_config", 00:18:27.498 "params": { 00:18:27.498 "discovery_filter": "match_any", 00:18:27.498 "admin_cmd_passthru": { 00:18:27.498 "identify_ctrlr": false 00:18:27.498 } 00:18:27.498 } 00:18:27.498 }, 00:18:27.498 { 00:18:27.498 "method": "nvmf_set_max_subsystems", 00:18:27.498 "params": { 00:18:27.498 "max_subsystems": 1024 00:18:27.498 } 00:18:27.498 }, 00:18:27.498 { 00:18:27.498 "method": "nvmf_set_crdt", 00:18:27.498 "params": { 00:18:27.498 "crdt1": 0, 00:18:27.498 "crdt2": 0, 00:18:27.498 "crdt3": 0 00:18:27.498 } 00:18:27.498 }, 00:18:27.498 { 00:18:27.498 "method": "nvmf_create_transport", 00:18:27.498 "params": { 00:18:27.498 "trtype": "TCP", 00:18:27.498 "max_queue_depth": 128, 00:18:27.498 "max_io_qpairs_per_ctrlr": 127, 00:18:27.498 "in_capsule_data_size": 4096, 00:18:27.498 "max_io_size": 131072, 00:18:27.498 "io_unit_size": 131072, 00:18:27.498 "max_aq_depth": 128, 00:18:27.498 "num_shared_buffers": 511, 00:18:27.498 "buf_cache_size": 4294967295, 00:18:27.498 "dif_insert_or_strip": false, 00:18:27.498 "zcopy": false, 00:18:27.498 "c2h_success": false, 00:18:27.498 "sock_priority": 0, 00:18:27.498 "abort_timeout_sec": 1, 00:18:27.498 "ack_timeout": 0, 00:18:27.498 "data_wr_pool_size": 0 00:18:27.498 } 00:18:27.498 }, 00:18:27.498 { 00:18:27.498 "method": "nvmf_create_subsystem", 00:18:27.498 "params": { 00:18:27.498 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.498 "allow_any_host": false, 00:18:27.498 "serial_number": "SPDK00000000000001", 00:18:27.498 "model_number": "SPDK bdev Controller", 00:18:27.498 "max_namespaces": 10, 00:18:27.498 "min_cntlid": 1, 00:18:27.498 "max_cntlid": 65519, 00:18:27.498 "ana_reporting": false 00:18:27.498 } 00:18:27.498 }, 00:18:27.498 { 00:18:27.498 "method": "nvmf_subsystem_add_host", 00:18:27.498 "params": { 00:18:27.498 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.498 "host": "nqn.2016-06.io.spdk:host1", 00:18:27.498 "psk": "/tmp/tmp.JMaqrwsgzY" 00:18:27.498 } 00:18:27.498 }, 00:18:27.498 { 00:18:27.498 "method": "nvmf_subsystem_add_ns", 00:18:27.498 "params": { 00:18:27.498 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.498 "namespace": { 00:18:27.498 "nsid": 1, 00:18:27.498 "bdev_name": "malloc0", 00:18:27.498 "nguid": "B678A2E9933E48BFB4363007AC00090F", 00:18:27.498 "uuid": "b678a2e9-933e-48bf-b436-3007ac00090f", 00:18:27.498 "no_auto_visible": false 00:18:27.498 } 00:18:27.498 } 00:18:27.498 }, 00:18:27.498 { 00:18:27.498 "method": "nvmf_subsystem_add_listener", 00:18:27.498 "params": { 00:18:27.498 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.498 "listen_address": { 00:18:27.498 "trtype": "TCP", 00:18:27.499 "adrfam": "IPv4", 00:18:27.499 "traddr": "10.0.0.2", 00:18:27.499 "trsvcid": "4420" 00:18:27.499 }, 00:18:27.499 "secure_channel": true 00:18:27.499 } 00:18:27.499 } 00:18:27.499 ] 00:18:27.499 } 00:18:27.499 ] 00:18:27.499 }' 00:18:27.499 23:45:02 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:27.757 23:45:02 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:18:27.757 "subsystems": [ 00:18:27.757 { 00:18:27.757 "subsystem": "keyring", 00:18:27.757 "config": [] 00:18:27.757 }, 00:18:27.757 { 00:18:27.757 "subsystem": "iobuf", 00:18:27.757 "config": [ 00:18:27.757 { 00:18:27.757 "method": "iobuf_set_options", 00:18:27.757 "params": { 00:18:27.757 "small_pool_count": 8192, 00:18:27.757 "large_pool_count": 1024, 00:18:27.757 "small_bufsize": 8192, 00:18:27.757 "large_bufsize": 135168 00:18:27.757 } 00:18:27.757 } 00:18:27.757 ] 00:18:27.757 }, 00:18:27.757 { 00:18:27.757 "subsystem": "sock", 00:18:27.757 "config": [ 00:18:27.757 { 00:18:27.757 "method": "sock_set_default_impl", 00:18:27.757 "params": { 00:18:27.757 "impl_name": "posix" 00:18:27.757 } 00:18:27.757 }, 00:18:27.757 { 00:18:27.757 "method": "sock_impl_set_options", 00:18:27.757 "params": { 00:18:27.757 "impl_name": "ssl", 00:18:27.757 "recv_buf_size": 4096, 00:18:27.757 "send_buf_size": 4096, 00:18:27.757 "enable_recv_pipe": true, 00:18:27.757 "enable_quickack": false, 00:18:27.757 "enable_placement_id": 0, 00:18:27.757 "enable_zerocopy_send_server": true, 00:18:27.757 "enable_zerocopy_send_client": false, 00:18:27.757 "zerocopy_threshold": 0, 00:18:27.757 "tls_version": 0, 00:18:27.757 "enable_ktls": false 00:18:27.757 } 00:18:27.757 }, 00:18:27.757 { 00:18:27.757 "method": "sock_impl_set_options", 00:18:27.757 "params": { 00:18:27.757 "impl_name": "posix", 00:18:27.757 "recv_buf_size": 2097152, 00:18:27.757 "send_buf_size": 2097152, 00:18:27.757 "enable_recv_pipe": true, 00:18:27.757 "enable_quickack": false, 00:18:27.757 "enable_placement_id": 0, 00:18:27.757 "enable_zerocopy_send_server": true, 00:18:27.757 "enable_zerocopy_send_client": false, 00:18:27.757 "zerocopy_threshold": 0, 00:18:27.757 "tls_version": 0, 00:18:27.757 "enable_ktls": false 00:18:27.757 } 00:18:27.757 } 00:18:27.757 ] 00:18:27.757 }, 00:18:27.757 { 00:18:27.757 "subsystem": "vmd", 00:18:27.757 "config": [] 00:18:27.757 }, 00:18:27.757 { 00:18:27.757 "subsystem": "accel", 00:18:27.757 "config": [ 00:18:27.757 { 00:18:27.757 "method": "accel_set_options", 00:18:27.757 "params": { 00:18:27.757 "small_cache_size": 128, 00:18:27.757 "large_cache_size": 16, 00:18:27.757 "task_count": 2048, 00:18:27.757 "sequence_count": 2048, 00:18:27.757 "buf_count": 2048 00:18:27.757 } 00:18:27.757 } 00:18:27.757 ] 00:18:27.757 }, 00:18:27.757 { 00:18:27.757 "subsystem": "bdev", 00:18:27.757 "config": [ 00:18:27.757 { 00:18:27.757 "method": "bdev_set_options", 00:18:27.757 "params": { 00:18:27.757 "bdev_io_pool_size": 65535, 00:18:27.757 "bdev_io_cache_size": 256, 00:18:27.757 "bdev_auto_examine": true, 00:18:27.757 "iobuf_small_cache_size": 128, 00:18:27.757 "iobuf_large_cache_size": 16 00:18:27.757 } 00:18:27.757 }, 00:18:27.757 { 00:18:27.757 "method": "bdev_raid_set_options", 00:18:27.757 "params": { 00:18:27.757 "process_window_size_kb": 1024 00:18:27.757 } 00:18:27.757 }, 00:18:27.757 { 00:18:27.757 "method": "bdev_iscsi_set_options", 00:18:27.757 "params": { 00:18:27.757 "timeout_sec": 30 00:18:27.757 } 00:18:27.757 }, 00:18:27.757 { 00:18:27.757 "method": "bdev_nvme_set_options", 00:18:27.757 "params": { 00:18:27.757 "action_on_timeout": "none", 00:18:27.757 "timeout_us": 0, 00:18:27.757 "timeout_admin_us": 0, 00:18:27.757 "keep_alive_timeout_ms": 10000, 00:18:27.757 "arbitration_burst": 0, 
00:18:27.757 "low_priority_weight": 0, 00:18:27.757 "medium_priority_weight": 0, 00:18:27.757 "high_priority_weight": 0, 00:18:27.757 "nvme_adminq_poll_period_us": 10000, 00:18:27.757 "nvme_ioq_poll_period_us": 0, 00:18:27.757 "io_queue_requests": 512, 00:18:27.757 "delay_cmd_submit": true, 00:18:27.757 "transport_retry_count": 4, 00:18:27.758 "bdev_retry_count": 3, 00:18:27.758 "transport_ack_timeout": 0, 00:18:27.758 "ctrlr_loss_timeout_sec": 0, 00:18:27.758 "reconnect_delay_sec": 0, 00:18:27.758 "fast_io_fail_timeout_sec": 0, 00:18:27.758 "disable_auto_failback": false, 00:18:27.758 "generate_uuids": false, 00:18:27.758 "transport_tos": 0, 00:18:27.758 "nvme_error_stat": false, 00:18:27.758 "rdma_srq_size": 0, 00:18:27.758 "io_path_stat": false, 00:18:27.758 "allow_accel_sequence": false, 00:18:27.758 "rdma_max_cq_size": 0, 00:18:27.758 "rdma_cm_event_timeout_ms": 0, 00:18:27.758 "dhchap_digests": [ 00:18:27.758 "sha256", 00:18:27.758 "sha384", 00:18:27.758 "sha512" 00:18:27.758 ], 00:18:27.758 "dhchap_dhgroups": [ 00:18:27.758 "null", 00:18:27.758 "ffdhe2048", 00:18:27.758 "ffdhe3072", 00:18:27.758 "ffdhe4096", 00:18:27.758 "ffdhe6144", 00:18:27.758 "ffdhe8192" 00:18:27.758 ] 00:18:27.758 } 00:18:27.758 }, 00:18:27.758 { 00:18:27.758 "method": "bdev_nvme_attach_controller", 00:18:27.758 "params": { 00:18:27.758 "name": "TLSTEST", 00:18:27.758 "trtype": "TCP", 00:18:27.758 "adrfam": "IPv4", 00:18:27.758 "traddr": "10.0.0.2", 00:18:27.758 "trsvcid": "4420", 00:18:27.758 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.758 "prchk_reftag": false, 00:18:27.758 "prchk_guard": false, 00:18:27.758 "ctrlr_loss_timeout_sec": 0, 00:18:27.758 "reconnect_delay_sec": 0, 00:18:27.758 "fast_io_fail_timeout_sec": 0, 00:18:27.758 "psk": "/tmp/tmp.JMaqrwsgzY", 00:18:27.758 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:27.758 "hdgst": false, 00:18:27.758 "ddgst": false 00:18:27.758 } 00:18:27.758 }, 00:18:27.758 { 00:18:27.758 "method": "bdev_nvme_set_hotplug", 00:18:27.758 "params": { 00:18:27.758 "period_us": 100000, 00:18:27.758 "enable": false 00:18:27.758 } 00:18:27.758 }, 00:18:27.758 { 00:18:27.758 "method": "bdev_wait_for_examine" 00:18:27.758 } 00:18:27.758 ] 00:18:27.758 }, 00:18:27.758 { 00:18:27.758 "subsystem": "nbd", 00:18:27.758 "config": [] 00:18:27.758 } 00:18:27.758 ] 00:18:27.758 }' 00:18:27.758 23:45:02 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 3807797 00:18:27.758 23:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3807797 ']' 00:18:27.758 23:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3807797 00:18:27.758 23:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:27.758 23:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:27.758 23:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3807797 00:18:27.758 23:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:27.758 23:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:27.758 23:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3807797' 00:18:27.758 killing process with pid 3807797 00:18:27.758 23:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3807797 00:18:27.758 Received shutdown signal, test time was about 10.000000 seconds 00:18:27.758 00:18:27.758 Latency(us) 00:18:27.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:18:27.758 =================================================================================================================== 00:18:27.758 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:27.758 [2024-07-15 23:45:02.768094] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:27.758 23:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3807797 00:18:28.016 23:45:03 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 3807430 00:18:28.016 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3807430 ']' 00:18:28.016 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3807430 00:18:28.016 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:28.016 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:28.016 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3807430 00:18:28.016 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:28.016 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:28.016 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3807430' 00:18:28.016 killing process with pid 3807430 00:18:28.016 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3807430 00:18:28.016 [2024-07-15 23:45:03.055350] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:28.016 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3807430 00:18:28.274 23:45:03 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:28.274 23:45:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:28.274 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:28.274 23:45:03 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:18:28.274 "subsystems": [ 00:18:28.274 { 00:18:28.274 "subsystem": "keyring", 00:18:28.274 "config": [] 00:18:28.274 }, 00:18:28.274 { 00:18:28.274 "subsystem": "iobuf", 00:18:28.274 "config": [ 00:18:28.274 { 00:18:28.274 "method": "iobuf_set_options", 00:18:28.274 "params": { 00:18:28.274 "small_pool_count": 8192, 00:18:28.274 "large_pool_count": 1024, 00:18:28.274 "small_bufsize": 8192, 00:18:28.274 "large_bufsize": 135168 00:18:28.274 } 00:18:28.274 } 00:18:28.274 ] 00:18:28.274 }, 00:18:28.274 { 00:18:28.274 "subsystem": "sock", 00:18:28.274 "config": [ 00:18:28.274 { 00:18:28.274 "method": "sock_set_default_impl", 00:18:28.274 "params": { 00:18:28.274 "impl_name": "posix" 00:18:28.274 } 00:18:28.274 }, 00:18:28.274 { 00:18:28.274 "method": "sock_impl_set_options", 00:18:28.274 "params": { 00:18:28.275 "impl_name": "ssl", 00:18:28.275 "recv_buf_size": 4096, 00:18:28.275 "send_buf_size": 4096, 00:18:28.275 "enable_recv_pipe": true, 00:18:28.275 "enable_quickack": false, 00:18:28.275 "enable_placement_id": 0, 00:18:28.275 "enable_zerocopy_send_server": true, 00:18:28.275 "enable_zerocopy_send_client": false, 00:18:28.275 "zerocopy_threshold": 0, 00:18:28.275 "tls_version": 0, 00:18:28.275 "enable_ktls": false 00:18:28.275 } 00:18:28.275 }, 00:18:28.275 { 00:18:28.275 "method": "sock_impl_set_options", 00:18:28.275 "params": { 00:18:28.275 "impl_name": "posix", 00:18:28.275 
"recv_buf_size": 2097152, 00:18:28.275 "send_buf_size": 2097152, 00:18:28.275 "enable_recv_pipe": true, 00:18:28.275 "enable_quickack": false, 00:18:28.275 "enable_placement_id": 0, 00:18:28.275 "enable_zerocopy_send_server": true, 00:18:28.275 "enable_zerocopy_send_client": false, 00:18:28.275 "zerocopy_threshold": 0, 00:18:28.275 "tls_version": 0, 00:18:28.275 "enable_ktls": false 00:18:28.275 } 00:18:28.275 } 00:18:28.275 ] 00:18:28.275 }, 00:18:28.275 { 00:18:28.275 "subsystem": "vmd", 00:18:28.275 "config": [] 00:18:28.275 }, 00:18:28.275 { 00:18:28.275 "subsystem": "accel", 00:18:28.275 "config": [ 00:18:28.275 { 00:18:28.275 "method": "accel_set_options", 00:18:28.275 "params": { 00:18:28.275 "small_cache_size": 128, 00:18:28.275 "large_cache_size": 16, 00:18:28.275 "task_count": 2048, 00:18:28.275 "sequence_count": 2048, 00:18:28.275 "buf_count": 2048 00:18:28.275 } 00:18:28.275 } 00:18:28.275 ] 00:18:28.275 }, 00:18:28.275 { 00:18:28.275 "subsystem": "bdev", 00:18:28.275 "config": [ 00:18:28.275 { 00:18:28.275 "method": "bdev_set_options", 00:18:28.275 "params": { 00:18:28.275 "bdev_io_pool_size": 65535, 00:18:28.275 "bdev_io_cache_size": 256, 00:18:28.275 "bdev_auto_examine": true, 00:18:28.275 "iobuf_small_cache_size": 128, 00:18:28.275 "iobuf_large_cache_size": 16 00:18:28.275 } 00:18:28.275 }, 00:18:28.275 { 00:18:28.275 "method": "bdev_raid_set_options", 00:18:28.275 "params": { 00:18:28.275 "process_window_size_kb": 1024 00:18:28.275 } 00:18:28.275 }, 00:18:28.275 { 00:18:28.275 "method": "bdev_iscsi_set_options", 00:18:28.275 "params": { 00:18:28.275 "timeout_sec": 30 00:18:28.275 } 00:18:28.275 }, 00:18:28.275 { 00:18:28.275 "method": "bdev_nvme_set_options", 00:18:28.275 "params": { 00:18:28.275 "action_on_timeout": "none", 00:18:28.275 "timeout_us": 0, 00:18:28.275 "timeout_admin_us": 0, 00:18:28.275 "keep_alive_timeout_ms": 10000, 00:18:28.275 "arbitration_burst": 0, 00:18:28.275 "low_priority_weight": 0, 00:18:28.275 "medium_priority_weight": 0, 00:18:28.275 "high_priority_weight": 0, 00:18:28.275 "nvme_adminq_poll_period_us": 10000, 00:18:28.275 "nvme_ioq_poll_period_us": 0, 00:18:28.275 "io_queue_requests": 0, 00:18:28.275 "delay_cmd_submit": true, 00:18:28.275 "transport_retry_count": 4, 00:18:28.275 "bdev_retry_count": 3, 00:18:28.275 "transport_ack_timeout": 0, 00:18:28.275 "ctrlr_loss_timeout_sec": 0, 00:18:28.275 "reconnect_delay_sec": 0, 00:18:28.275 "fast_io_fail_timeout_sec": 0, 00:18:28.275 "disable_auto_failback": false, 00:18:28.275 "generate_uuids": false, 00:18:28.275 "transport_tos": 0, 00:18:28.275 "nvme_error_stat": false, 00:18:28.275 "rdma_srq_size": 0, 00:18:28.275 "io_path_stat": false, 00:18:28.275 "allow_accel_sequence": false, 00:18:28.275 "rdma_max_cq_size": 0, 00:18:28.275 "rdma_cm_event_timeout_ms": 0, 00:18:28.275 "dhchap_digests": [ 00:18:28.275 "sha256", 00:18:28.275 "sha384", 00:18:28.275 "sha512" 00:18:28.275 ], 00:18:28.275 "dhchap_dhgroups": [ 00:18:28.275 "null", 00:18:28.275 "ffdhe2048", 00:18:28.275 "ffdhe3072", 00:18:28.275 "ffdhe4096", 00:18:28.275 "ffdhe6144", 00:18:28.275 "ffdhe8192" 00:18:28.275 ] 00:18:28.275 } 00:18:28.275 }, 00:18:28.275 { 00:18:28.275 "method": "bdev_nvme_set_hotplug", 00:18:28.275 "params": { 00:18:28.275 "period_us": 100000, 00:18:28.275 "enable": false 00:18:28.275 } 00:18:28.275 }, 00:18:28.275 { 00:18:28.275 "method": "bdev_malloc_create", 00:18:28.275 "params": { 00:18:28.275 "name": "malloc0", 00:18:28.275 "num_blocks": 8192, 00:18:28.275 "block_size": 4096, 00:18:28.275 "physical_block_size": 4096, 
00:18:28.275 "uuid": "b678a2e9-933e-48bf-b436-3007ac00090f", 00:18:28.275 "optimal_io_boundary": 0 00:18:28.275 } 00:18:28.275 }, 00:18:28.275 { 00:18:28.275 "method": "bdev_wait_for_examine" 00:18:28.275 } 00:18:28.275 ] 00:18:28.275 }, 00:18:28.275 { 00:18:28.275 "subsystem": "nbd", 00:18:28.275 "config": [] 00:18:28.275 }, 00:18:28.275 { 00:18:28.275 "subsystem": "scheduler", 00:18:28.275 "config": [ 00:18:28.275 { 00:18:28.275 "method": "framework_set_scheduler", 00:18:28.275 "params": { 00:18:28.275 "name": "static" 00:18:28.275 } 00:18:28.275 } 00:18:28.275 ] 00:18:28.275 }, 00:18:28.275 { 00:18:28.275 "subsystem": "nvmf", 00:18:28.275 "config": [ 00:18:28.275 { 00:18:28.275 "method": "nvmf_set_config", 00:18:28.275 "params": { 00:18:28.275 "discovery_filter": "match_any", 00:18:28.275 "admin_cmd_passthru": { 00:18:28.275 "identify_ctrlr": false 00:18:28.275 } 00:18:28.275 } 00:18:28.275 }, 00:18:28.275 { 00:18:28.275 "method": "nvmf_set_max_subsystems", 00:18:28.275 "params": { 00:18:28.275 "max_subsystems": 1024 00:18:28.275 } 00:18:28.275 }, 00:18:28.275 { 00:18:28.275 "method": "nvmf_set_crdt", 00:18:28.275 "params": { 00:18:28.275 "crdt1": 0, 00:18:28.275 "crdt2": 0, 00:18:28.275 "crdt3": 0 00:18:28.275 } 00:18:28.275 }, 00:18:28.275 { 00:18:28.275 "method": "nvmf_create_transport", 00:18:28.275 "params": { 00:18:28.275 "trtype": "TCP", 00:18:28.275 "max_queue_depth": 128, 00:18:28.275 "max_io_qpairs_per_ctrlr": 127, 00:18:28.275 "in_capsule_data_size": 4096, 00:18:28.275 "max_io_size": 131072, 00:18:28.275 "io_unit_size": 131072, 00:18:28.275 "max_aq_depth": 128, 00:18:28.275 "num_shared_buffers": 511, 00:18:28.275 "buf_cache_size": 4294967295, 00:18:28.275 "dif_insert_or_strip": false, 00:18:28.275 "zcopy": false, 00:18:28.275 "c2h_success": false, 00:18:28.275 "sock_priority": 0, 00:18:28.275 "abort_timeout_sec": 1, 00:18:28.275 "ack_timeout": 0, 00:18:28.275 "data_wr_pool_size": 0 00:18:28.275 } 00:18:28.275 }, 00:18:28.275 { 00:18:28.275 "method": "nvmf_create_subsystem", 00:18:28.275 "params": { 00:18:28.275 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.275 "allow_any_host": false, 00:18:28.275 "serial_number": "SPDK00000000000001", 00:18:28.275 "model_number": "SPDK bdev Controller", 00:18:28.275 "max_namespaces": 10, 00:18:28.275 "min_cntlid": 1, 00:18:28.275 "max_cntlid": 65519, 00:18:28.275 "ana_reporting": false 00:18:28.275 } 00:18:28.275 }, 00:18:28.275 { 00:18:28.275 "method": "nvmf_subsystem_add_host", 00:18:28.275 "params": { 00:18:28.275 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.275 "host": "nqn.2016-06.io.spdk:host1", 00:18:28.275 "psk": "/tmp/tmp.JMaqrwsgzY" 00:18:28.275 } 00:18:28.275 }, 00:18:28.275 { 00:18:28.275 "method": "nvmf_subsystem_add_ns", 00:18:28.276 "params": { 00:18:28.276 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.276 "namespace": { 00:18:28.276 "nsid": 1, 00:18:28.276 "bdev_name": "malloc0", 00:18:28.276 "nguid": "B678A2E9933E48BFB4363007AC00090F", 00:18:28.276 "uuid": "b678a2e9-933e-48bf-b436-3007ac00090f", 00:18:28.276 "no_auto_visible": false 00:18:28.276 } 00:18:28.276 } 00:18:28.276 }, 00:18:28.276 { 00:18:28.276 "method": "nvmf_subsystem_add_listener", 00:18:28.276 "params": { 00:18:28.276 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.276 "listen_address": { 00:18:28.276 "trtype": "TCP", 00:18:28.276 "adrfam": "IPv4", 00:18:28.276 "traddr": "10.0.0.2", 00:18:28.276 "trsvcid": "4420" 00:18:28.276 }, 00:18:28.276 "secure_channel": true 00:18:28.276 } 00:18:28.276 } 00:18:28.276 ] 00:18:28.276 } 00:18:28.276 ] 00:18:28.276 }' 
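
Both JSON blobs above were captured live with save_config at tls.sh@196-197 (tgtconf from the target's default RPC socket, bdevperfconf from /var/tmp/bdevperf.sock) and are echoed back verbatim at tls.sh@203-204, so the PSK host entry has to survive the save/restore round trip. One way to spot-check that, assuming jq is available on the box:

./scripts/rpc.py save_config \
    | jq '.subsystems[] | select(.subsystem == "nvmf")
          | .config[] | select(.method == "nvmf_subsystem_add_host")
          | .params.psk'
# expected here: "/tmp/tmp.JMaqrwsgzY"
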
00:18:28.276 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.276 23:45:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3807988 00:18:28.276 23:45:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:28.276 23:45:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3807988 00:18:28.276 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3807988 ']' 00:18:28.276 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.276 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:28.276 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.276 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:28.276 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.276 [2024-07-15 23:45:03.383119] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:18:28.276 [2024-07-15 23:45:03.383210] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:28.534 EAL: No free 2048 kB hugepages reported on node 1 00:18:28.534 [2024-07-15 23:45:03.447626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.534 [2024-07-15 23:45:03.556128] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:28.534 [2024-07-15 23:45:03.556182] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:28.534 [2024-07-15 23:45:03.556198] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:28.534 [2024-07-15 23:45:03.556210] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:28.534 [2024-07-15 23:45:03.556221] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
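The waitforlisten call traced above blocks until the freshly started target process answers on /var/tmp/spdk.sock. A simplified sketch of what such a helper does, assuming the stock rpc.py client from the SPDK tree; the retry bound and sleep interval are assumptions, and this is not the actual autotest_common.sh implementation:

    # Poll the app's RPC socket until it responds, bailing out if the
    # process has already died; rpc_get_methods is a cheap no-op query.
    pid=3807988                      # pid reported by the harness above
    for _ in $(seq 1 200); do
        kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done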
00:18:28.534 [2024-07-15 23:45:03.556321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.791 [2024-07-15 23:45:03.790250] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:28.791 [2024-07-15 23:45:03.806209] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:28.791 [2024-07-15 23:45:03.822290] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:28.791 [2024-07-15 23:45:03.833116] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:29.364 23:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:29.364 23:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:29.364 23:45:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:29.364 23:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:29.364 23:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.364 23:45:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:29.364 23:45:04 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=3808136 00:18:29.364 23:45:04 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 3808136 /var/tmp/bdevperf.sock 00:18:29.364 23:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3808136 ']' 00:18:29.364 23:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:29.365 23:45:04 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:29.365 23:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:29.365 23:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:29.365 23:45:04 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:18:29.365 "subsystems": [ 00:18:29.365 { 00:18:29.365 "subsystem": "keyring", 00:18:29.365 "config": [] 00:18:29.365 }, 00:18:29.365 { 00:18:29.365 "subsystem": "iobuf", 00:18:29.365 "config": [ 00:18:29.365 { 00:18:29.365 "method": "iobuf_set_options", 00:18:29.365 "params": { 00:18:29.365 "small_pool_count": 8192, 00:18:29.365 "large_pool_count": 1024, 00:18:29.365 "small_bufsize": 8192, 00:18:29.365 "large_bufsize": 135168 00:18:29.365 } 00:18:29.365 } 00:18:29.365 ] 00:18:29.365 }, 00:18:29.365 { 00:18:29.365 "subsystem": "sock", 00:18:29.365 "config": [ 00:18:29.365 { 00:18:29.365 "method": "sock_set_default_impl", 00:18:29.365 "params": { 00:18:29.365 "impl_name": "posix" 00:18:29.365 } 00:18:29.365 }, 00:18:29.365 { 00:18:29.365 "method": "sock_impl_set_options", 00:18:29.365 "params": { 00:18:29.365 "impl_name": "ssl", 00:18:29.365 "recv_buf_size": 4096, 00:18:29.365 "send_buf_size": 4096, 00:18:29.365 "enable_recv_pipe": true, 00:18:29.365 "enable_quickack": false, 00:18:29.365 "enable_placement_id": 0, 00:18:29.365 "enable_zerocopy_send_server": true, 00:18:29.365 "enable_zerocopy_send_client": false, 00:18:29.365 "zerocopy_threshold": 0, 00:18:29.365 "tls_version": 0, 00:18:29.365 "enable_ktls": false 00:18:29.365 } 00:18:29.365 }, 00:18:29.365 { 00:18:29.365 "method": "sock_impl_set_options", 00:18:29.365 "params": { 00:18:29.365 "impl_name": "posix", 00:18:29.365 "recv_buf_size": 2097152, 00:18:29.365 "send_buf_size": 2097152, 00:18:29.365 "enable_recv_pipe": true, 00:18:29.365 "enable_quickack": false, 00:18:29.365 "enable_placement_id": 0, 00:18:29.365 "enable_zerocopy_send_server": true, 00:18:29.365 "enable_zerocopy_send_client": false, 00:18:29.365 "zerocopy_threshold": 0, 00:18:29.365 "tls_version": 0, 00:18:29.365 "enable_ktls": false 00:18:29.365 } 00:18:29.365 } 00:18:29.365 ] 00:18:29.365 }, 00:18:29.365 { 00:18:29.365 "subsystem": "vmd", 00:18:29.365 "config": [] 00:18:29.365 }, 00:18:29.365 { 00:18:29.365 "subsystem": "accel", 00:18:29.365 "config": [ 00:18:29.365 { 00:18:29.365 "method": "accel_set_options", 00:18:29.365 "params": { 00:18:29.365 "small_cache_size": 128, 00:18:29.365 "large_cache_size": 16, 00:18:29.365 "task_count": 2048, 00:18:29.365 "sequence_count": 2048, 00:18:29.365 "buf_count": 2048 00:18:29.365 } 00:18:29.365 } 00:18:29.365 ] 00:18:29.365 }, 00:18:29.365 { 00:18:29.365 "subsystem": "bdev", 00:18:29.365 "config": [ 00:18:29.365 { 00:18:29.365 "method": "bdev_set_options", 00:18:29.365 "params": { 00:18:29.365 "bdev_io_pool_size": 65535, 00:18:29.365 "bdev_io_cache_size": 256, 00:18:29.365 "bdev_auto_examine": true, 00:18:29.365 "iobuf_small_cache_size": 128, 00:18:29.365 "iobuf_large_cache_size": 16 00:18:29.365 } 00:18:29.365 }, 00:18:29.365 { 00:18:29.365 "method": "bdev_raid_set_options", 00:18:29.365 "params": { 00:18:29.365 "process_window_size_kb": 1024 00:18:29.365 } 00:18:29.365 }, 00:18:29.365 { 00:18:29.365 "method": "bdev_iscsi_set_options", 00:18:29.365 "params": { 00:18:29.365 "timeout_sec": 30 00:18:29.365 } 00:18:29.365 }, 00:18:29.365 { 00:18:29.365 "method": "bdev_nvme_set_options", 00:18:29.365 "params": { 00:18:29.365 "action_on_timeout": "none", 00:18:29.365 "timeout_us": 0, 00:18:29.365 "timeout_admin_us": 0, 00:18:29.365 "keep_alive_timeout_ms": 10000, 00:18:29.365 "arbitration_burst": 0, 00:18:29.365 "low_priority_weight": 0, 00:18:29.365 "medium_priority_weight": 0, 00:18:29.365 "high_priority_weight": 0, 00:18:29.365 
"nvme_adminq_poll_period_us": 10000, 00:18:29.365 "nvme_ioq_poll_period_us": 0, 00:18:29.365 "io_queue_requests": 512, 00:18:29.365 "delay_cmd_submit": true, 00:18:29.365 "transport_retry_count": 4, 00:18:29.365 "bdev_retry_count": 3, 00:18:29.365 "transport_ack_timeout": 0, 00:18:29.365 "ctrlr_loss_timeout_sec": 0, 00:18:29.365 "reconnect_delay_sec": 0, 00:18:29.365 "fast_io_fail_timeout_sec": 0, 00:18:29.365 "disable_auto_failback": false, 00:18:29.365 "generate_uuids": false, 00:18:29.365 "transport_tos": 0, 00:18:29.365 "nvme_error_stat": false, 00:18:29.365 "rdma_srq_size": 0, 00:18:29.365 "io_path_stat": false, 00:18:29.365 "allow_accel_sequence": false, 00:18:29.365 "rdma_max_cq_size": 0, 00:18:29.365 "rdma_cm_event_timeout_ms": 0, 00:18:29.365 "dhchap_digests": [ 00:18:29.365 "sha256", 00:18:29.365 "sha384", 00:18:29.365 "sha512" 00:18:29.365 ], 00:18:29.365 "dhchap_dhgroups": [ 00:18:29.365 "null", 00:18:29.365 "ffdhe2048", 00:18:29.365 "ffdhe3072", 00:18:29.365 "ffdhe4096", 00:18:29.365 "ffdWaiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:29.365 he6144", 00:18:29.365 "ffdhe8192" 00:18:29.365 ] 00:18:29.365 } 00:18:29.365 }, 00:18:29.365 { 00:18:29.365 "method": "bdev_nvme_attach_controller", 00:18:29.365 "params": { 00:18:29.365 "name": "TLSTEST", 00:18:29.365 "trtype": "TCP", 00:18:29.365 "adrfam": "IPv4", 00:18:29.365 "traddr": "10.0.0.2", 00:18:29.365 "trsvcid": "4420", 00:18:29.365 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.365 "prchk_reftag": false, 00:18:29.365 "prchk_guard": false, 00:18:29.365 "ctrlr_loss_timeout_sec": 0, 00:18:29.365 "reconnect_delay_sec": 0, 00:18:29.365 "fast_io_fail_timeout_sec": 0, 00:18:29.365 "psk": "/tmp/tmp.JMaqrwsgzY", 00:18:29.365 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:29.366 "hdgst": false, 00:18:29.366 "ddgst": false 00:18:29.366 } 00:18:29.366 }, 00:18:29.366 { 00:18:29.366 "method": "bdev_nvme_set_hotplug", 00:18:29.366 "params": { 00:18:29.366 "period_us": 100000, 00:18:29.366 "enable": false 00:18:29.366 } 00:18:29.366 }, 00:18:29.366 { 00:18:29.366 "method": "bdev_wait_for_examine" 00:18:29.366 } 00:18:29.366 ] 00:18:29.366 }, 00:18:29.366 { 00:18:29.366 "subsystem": "nbd", 00:18:29.366 "config": [] 00:18:29.366 } 00:18:29.366 ] 00:18:29.366 }' 00:18:29.366 23:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:29.366 23:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.366 [2024-07-15 23:45:04.406056] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
00:18:29.366 [2024-07-15 23:45:04.406144] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3808136 ] 00:18:29.366 EAL: No free 2048 kB hugepages reported on node 1 00:18:29.366 [2024-07-15 23:45:04.464132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.623 [2024-07-15 23:45:04.569676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:29.623 [2024-07-15 23:45:04.733129] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:29.623 [2024-07-15 23:45:04.733246] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:30.555 23:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:30.555 23:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:30.555 23:45:05 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:30.555 Running I/O for 10 seconds... 00:18:40.514 00:18:40.514 Latency(us) 00:18:40.514 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.514 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:40.514 Verification LBA range: start 0x0 length 0x2000 00:18:40.514 TLSTESTn1 : 10.03 3051.83 11.92 0.00 0.00 41852.85 6165.24 72623.60 00:18:40.514 =================================================================================================================== 00:18:40.514 Total : 3051.83 11.92 0.00 0.00 41852.85 6165.24 72623.60 00:18:40.514 0 00:18:40.514 23:45:15 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:40.514 23:45:15 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 3808136 00:18:40.514 23:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3808136 ']' 00:18:40.514 23:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3808136 00:18:40.514 23:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:40.514 23:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:40.514 23:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3808136 00:18:40.514 23:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:40.514 23:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:40.514 23:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3808136' 00:18:40.514 killing process with pid 3808136 00:18:40.514 23:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3808136 00:18:40.515 Received shutdown signal, test time was about 10.000000 seconds 00:18:40.515 00:18:40.515 Latency(us) 00:18:40.515 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.515 =================================================================================================================== 00:18:40.515 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:40.515 [2024-07-15 23:45:15.582139] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:40.515 23:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3808136 00:18:40.771 23:45:15 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 3807988 00:18:40.771 23:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3807988 ']' 00:18:40.771 23:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3807988 00:18:40.771 23:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:40.771 23:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:40.771 23:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3807988 00:18:40.771 23:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:40.771 23:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:40.771 23:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3807988' 00:18:40.771 killing process with pid 3807988 00:18:40.771 23:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3807988 00:18:40.771 [2024-07-15 23:45:15.870129] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:40.771 23:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3807988 00:18:41.028 23:45:16 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:18:41.028 23:45:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:41.028 23:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:41.028 23:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.028 23:45:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3810001 00:18:41.028 23:45:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:41.028 23:45:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3810001 00:18:41.028 23:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3810001 ']' 00:18:41.028 23:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.028 23:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:41.028 23:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:41.028 23:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:41.028 23:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.285 [2024-07-15 23:45:16.197949] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
00:18:41.285 [2024-07-15 23:45:16.198041] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:41.285 EAL: No free 2048 kB hugepages reported on node 1 00:18:41.285 [2024-07-15 23:45:16.263551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.285 [2024-07-15 23:45:16.362333] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:41.285 [2024-07-15 23:45:16.362390] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:41.285 [2024-07-15 23:45:16.362403] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:41.285 [2024-07-15 23:45:16.362414] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:41.285 [2024-07-15 23:45:16.362423] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:41.285 [2024-07-15 23:45:16.362448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.542 23:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:41.542 23:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:41.542 23:45:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:41.542 23:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:41.542 23:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.542 23:45:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:41.542 23:45:16 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.JMaqrwsgzY 00:18:41.542 23:45:16 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.JMaqrwsgzY 00:18:41.542 23:45:16 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:41.799 [2024-07-15 23:45:16.771674] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:41.799 23:45:16 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:42.055 23:45:17 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:42.311 [2024-07-15 23:45:17.265053] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:42.311 [2024-07-15 23:45:17.265319] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:42.311 23:45:17 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:42.569 malloc0 00:18:42.569 23:45:17 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:42.826 23:45:17 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.JMaqrwsgzY 00:18:43.083 [2024-07-15 23:45:17.996971] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:43.083 23:45:18 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=3810257 00:18:43.083 23:45:18 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:43.083 23:45:18 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:43.083 23:45:18 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 3810257 /var/tmp/bdevperf.sock 00:18:43.083 23:45:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3810257 ']' 00:18:43.083 23:45:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:43.083 23:45:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:43.083 23:45:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:43.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:43.083 23:45:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:43.083 23:45:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.083 [2024-07-15 23:45:18.058795] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:18:43.083 [2024-07-15 23:45:18.058882] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3810257 ] 00:18:43.083 EAL: No free 2048 kB hugepages reported on node 1 00:18:43.083 [2024-07-15 23:45:18.117713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.340 [2024-07-15 23:45:18.231898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:43.340 23:45:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:43.341 23:45:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:43.341 23:45:18 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JMaqrwsgzY 00:18:43.597 23:45:18 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:43.854 [2024-07-15 23:45:18.828134] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:43.854 nvme0n1 00:18:43.854 23:45:18 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:44.111 Running I/O for 1 seconds... 
00:18:45.041 00:18:45.041 Latency(us) 00:18:45.041 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.041 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:45.041 Verification LBA range: start 0x0 length 0x2000 00:18:45.041 nvme0n1 : 1.02 3296.22 12.88 0.00 0.00 38324.30 9175.04 39224.51 00:18:45.041 =================================================================================================================== 00:18:45.041 Total : 3296.22 12.88 0.00 0.00 38324.30 9175.04 39224.51 00:18:45.041 0 00:18:45.041 23:45:20 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 3810257 00:18:45.041 23:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3810257 ']' 00:18:45.041 23:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3810257 00:18:45.041 23:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:45.041 23:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:45.041 23:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3810257 00:18:45.041 23:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:45.041 23:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:45.041 23:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3810257' 00:18:45.041 killing process with pid 3810257 00:18:45.041 23:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3810257 00:18:45.041 Received shutdown signal, test time was about 1.000000 seconds 00:18:45.041 00:18:45.041 Latency(us) 00:18:45.041 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.042 =================================================================================================================== 00:18:45.042 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:45.042 23:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3810257 00:18:45.299 23:45:20 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 3810001 00:18:45.299 23:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3810001 ']' 00:18:45.299 23:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3810001 00:18:45.299 23:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:45.299 23:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:45.299 23:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3810001 00:18:45.299 23:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:45.299 23:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:45.299 23:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3810001' 00:18:45.299 killing process with pid 3810001 00:18:45.299 23:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3810001 00:18:45.299 [2024-07-15 23:45:20.351859] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:45.299 23:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3810001 00:18:45.558 23:45:20 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:18:45.558 23:45:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:45.558 
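That closes the second pass: the keyring-based initiator sustained 3296.22 IOPS over the TLS channel (the MiB/s column is just IOPS at the 4 KiB I/O size: 3296.22 x 4096 / 2^20 ≈ 12.88), after which bdevperf and then the target were killed. For reference, the target-side RPC sequence setup_nvmf_tgt issued for this pass, collected from the trace above into one place (rpc.py again stands in for the full scripts/rpc.py path):

    # Build the TLS-enabled target: TCP transport, a subsystem, a listener
    # created with -k (TLS), a malloc-backed namespace, and a host entry
    # bound to the PSK file.
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JMaqrwsgzY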
23:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:45.558 23:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.558 23:45:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3810537 00:18:45.558 23:45:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:45.558 23:45:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3810537 00:18:45.558 23:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3810537 ']' 00:18:45.558 23:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.558 23:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:45.558 23:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.558 23:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:45.558 23:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.558 [2024-07-15 23:45:20.647639] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:18:45.558 [2024-07-15 23:45:20.647727] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:45.850 EAL: No free 2048 kB hugepages reported on node 1 00:18:45.850 [2024-07-15 23:45:20.717028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.851 [2024-07-15 23:45:20.823123] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:45.851 [2024-07-15 23:45:20.823188] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:45.851 [2024-07-15 23:45:20.823202] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:45.851 [2024-07-15 23:45:20.823214] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:45.851 [2024-07-15 23:45:20.823223] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:45.851 [2024-07-15 23:45:20.823280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.851 23:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:45.851 23:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:45.851 23:45:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:45.851 23:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:45.851 23:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.109 23:45:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:46.109 23:45:20 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:18:46.109 23:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.109 23:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.109 [2024-07-15 23:45:20.962434] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:46.109 malloc0 00:18:46.109 [2024-07-15 23:45:20.994362] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:46.109 [2024-07-15 23:45:20.994595] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:46.109 23:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.109 23:45:21 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=3810681 00:18:46.109 23:45:21 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:46.109 23:45:21 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 3810681 /var/tmp/bdevperf.sock 00:18:46.109 23:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3810681 ']' 00:18:46.109 23:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:46.109 23:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:46.109 23:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:46.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:46.109 23:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:46.109 23:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.109 [2024-07-15 23:45:21.062141] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
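The bdevperf instance launched above runs with -z, so after applying its config it idles; I/O only starts when the companion script fires perform_tests over the RPC socket, the step traced further down. The two halves of that handshake as they appear in this log, with the long workspace prefix trimmed (in the harness, the keyring_file_add_key and bdev_nvme_attach_controller RPCs happen between these two steps):

    # Start bdevperf idle (-z) with an RPC socket, then trigger the run.
    ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests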
00:18:46.109 [2024-07-15 23:45:21.062205] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3810681 ] 00:18:46.109 EAL: No free 2048 kB hugepages reported on node 1 00:18:46.109 [2024-07-15 23:45:21.118627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.109 [2024-07-15 23:45:21.225000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:46.366 23:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:46.366 23:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:46.366 23:45:21 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JMaqrwsgzY 00:18:46.624 23:45:21 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:46.882 [2024-07-15 23:45:21.819376] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:46.882 nvme0n1 00:18:46.882 23:45:21 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:47.140 Running I/O for 1 seconds... 00:18:48.072 00:18:48.072 Latency(us) 00:18:48.072 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:48.072 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:48.072 Verification LBA range: start 0x0 length 0x2000 00:18:48.072 nvme0n1 : 1.03 3442.80 13.45 0.00 0.00 36669.93 6213.78 32039.82 00:18:48.072 =================================================================================================================== 00:18:48.072 Total : 3442.80 13.45 0.00 0.00 36669.93 6213.78 32039.82 00:18:48.072 0 00:18:48.072 23:45:23 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:18:48.072 23:45:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.072 23:45:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.072 23:45:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.072 23:45:23 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:18:48.072 "subsystems": [ 00:18:48.072 { 00:18:48.072 "subsystem": "keyring", 00:18:48.072 "config": [ 00:18:48.072 { 00:18:48.072 "method": "keyring_file_add_key", 00:18:48.072 "params": { 00:18:48.072 "name": "key0", 00:18:48.072 "path": "/tmp/tmp.JMaqrwsgzY" 00:18:48.072 } 00:18:48.072 } 00:18:48.072 ] 00:18:48.072 }, 00:18:48.072 { 00:18:48.072 "subsystem": "iobuf", 00:18:48.072 "config": [ 00:18:48.072 { 00:18:48.072 "method": "iobuf_set_options", 00:18:48.072 "params": { 00:18:48.072 "small_pool_count": 8192, 00:18:48.072 "large_pool_count": 1024, 00:18:48.072 "small_bufsize": 8192, 00:18:48.072 "large_bufsize": 135168 00:18:48.072 } 00:18:48.072 } 00:18:48.072 ] 00:18:48.072 }, 00:18:48.072 { 00:18:48.072 "subsystem": "sock", 00:18:48.072 "config": [ 00:18:48.072 { 00:18:48.072 "method": "sock_set_default_impl", 00:18:48.072 "params": { 00:18:48.072 "impl_name": "posix" 00:18:48.072 } 
00:18:48.072 }, 00:18:48.072 { 00:18:48.072 "method": "sock_impl_set_options", 00:18:48.072 "params": { 00:18:48.072 "impl_name": "ssl", 00:18:48.072 "recv_buf_size": 4096, 00:18:48.072 "send_buf_size": 4096, 00:18:48.072 "enable_recv_pipe": true, 00:18:48.072 "enable_quickack": false, 00:18:48.072 "enable_placement_id": 0, 00:18:48.072 "enable_zerocopy_send_server": true, 00:18:48.072 "enable_zerocopy_send_client": false, 00:18:48.072 "zerocopy_threshold": 0, 00:18:48.072 "tls_version": 0, 00:18:48.072 "enable_ktls": false 00:18:48.072 } 00:18:48.072 }, 00:18:48.072 { 00:18:48.072 "method": "sock_impl_set_options", 00:18:48.072 "params": { 00:18:48.072 "impl_name": "posix", 00:18:48.072 "recv_buf_size": 2097152, 00:18:48.072 "send_buf_size": 2097152, 00:18:48.072 "enable_recv_pipe": true, 00:18:48.072 "enable_quickack": false, 00:18:48.072 "enable_placement_id": 0, 00:18:48.072 "enable_zerocopy_send_server": true, 00:18:48.072 "enable_zerocopy_send_client": false, 00:18:48.072 "zerocopy_threshold": 0, 00:18:48.073 "tls_version": 0, 00:18:48.073 "enable_ktls": false 00:18:48.073 } 00:18:48.073 } 00:18:48.073 ] 00:18:48.073 }, 00:18:48.073 { 00:18:48.073 "subsystem": "vmd", 00:18:48.073 "config": [] 00:18:48.073 }, 00:18:48.073 { 00:18:48.073 "subsystem": "accel", 00:18:48.073 "config": [ 00:18:48.073 { 00:18:48.073 "method": "accel_set_options", 00:18:48.073 "params": { 00:18:48.073 "small_cache_size": 128, 00:18:48.073 "large_cache_size": 16, 00:18:48.073 "task_count": 2048, 00:18:48.073 "sequence_count": 2048, 00:18:48.073 "buf_count": 2048 00:18:48.073 } 00:18:48.073 } 00:18:48.073 ] 00:18:48.073 }, 00:18:48.073 { 00:18:48.073 "subsystem": "bdev", 00:18:48.073 "config": [ 00:18:48.073 { 00:18:48.073 "method": "bdev_set_options", 00:18:48.073 "params": { 00:18:48.073 "bdev_io_pool_size": 65535, 00:18:48.073 "bdev_io_cache_size": 256, 00:18:48.073 "bdev_auto_examine": true, 00:18:48.073 "iobuf_small_cache_size": 128, 00:18:48.073 "iobuf_large_cache_size": 16 00:18:48.073 } 00:18:48.073 }, 00:18:48.073 { 00:18:48.073 "method": "bdev_raid_set_options", 00:18:48.073 "params": { 00:18:48.073 "process_window_size_kb": 1024 00:18:48.073 } 00:18:48.073 }, 00:18:48.073 { 00:18:48.073 "method": "bdev_iscsi_set_options", 00:18:48.073 "params": { 00:18:48.073 "timeout_sec": 30 00:18:48.073 } 00:18:48.073 }, 00:18:48.073 { 00:18:48.073 "method": "bdev_nvme_set_options", 00:18:48.073 "params": { 00:18:48.073 "action_on_timeout": "none", 00:18:48.073 "timeout_us": 0, 00:18:48.073 "timeout_admin_us": 0, 00:18:48.073 "keep_alive_timeout_ms": 10000, 00:18:48.073 "arbitration_burst": 0, 00:18:48.073 "low_priority_weight": 0, 00:18:48.073 "medium_priority_weight": 0, 00:18:48.073 "high_priority_weight": 0, 00:18:48.073 "nvme_adminq_poll_period_us": 10000, 00:18:48.073 "nvme_ioq_poll_period_us": 0, 00:18:48.073 "io_queue_requests": 0, 00:18:48.073 "delay_cmd_submit": true, 00:18:48.073 "transport_retry_count": 4, 00:18:48.073 "bdev_retry_count": 3, 00:18:48.073 "transport_ack_timeout": 0, 00:18:48.073 "ctrlr_loss_timeout_sec": 0, 00:18:48.073 "reconnect_delay_sec": 0, 00:18:48.073 "fast_io_fail_timeout_sec": 0, 00:18:48.073 "disable_auto_failback": false, 00:18:48.073 "generate_uuids": false, 00:18:48.073 "transport_tos": 0, 00:18:48.073 "nvme_error_stat": false, 00:18:48.073 "rdma_srq_size": 0, 00:18:48.073 "io_path_stat": false, 00:18:48.073 "allow_accel_sequence": false, 00:18:48.073 "rdma_max_cq_size": 0, 00:18:48.073 "rdma_cm_event_timeout_ms": 0, 00:18:48.073 "dhchap_digests": [ 00:18:48.073 "sha256", 
00:18:48.073 "sha384", 00:18:48.073 "sha512" 00:18:48.073 ], 00:18:48.073 "dhchap_dhgroups": [ 00:18:48.073 "null", 00:18:48.073 "ffdhe2048", 00:18:48.073 "ffdhe3072", 00:18:48.073 "ffdhe4096", 00:18:48.073 "ffdhe6144", 00:18:48.073 "ffdhe8192" 00:18:48.073 ] 00:18:48.073 } 00:18:48.073 }, 00:18:48.073 { 00:18:48.073 "method": "bdev_nvme_set_hotplug", 00:18:48.073 "params": { 00:18:48.073 "period_us": 100000, 00:18:48.073 "enable": false 00:18:48.073 } 00:18:48.073 }, 00:18:48.073 { 00:18:48.073 "method": "bdev_malloc_create", 00:18:48.073 "params": { 00:18:48.073 "name": "malloc0", 00:18:48.073 "num_blocks": 8192, 00:18:48.073 "block_size": 4096, 00:18:48.073 "physical_block_size": 4096, 00:18:48.073 "uuid": "5fd512a5-3406-41f8-acbe-b720cfe01d06", 00:18:48.073 "optimal_io_boundary": 0 00:18:48.073 } 00:18:48.073 }, 00:18:48.073 { 00:18:48.073 "method": "bdev_wait_for_examine" 00:18:48.073 } 00:18:48.073 ] 00:18:48.073 }, 00:18:48.073 { 00:18:48.073 "subsystem": "nbd", 00:18:48.073 "config": [] 00:18:48.073 }, 00:18:48.073 { 00:18:48.073 "subsystem": "scheduler", 00:18:48.073 "config": [ 00:18:48.073 { 00:18:48.073 "method": "framework_set_scheduler", 00:18:48.073 "params": { 00:18:48.073 "name": "static" 00:18:48.073 } 00:18:48.073 } 00:18:48.073 ] 00:18:48.073 }, 00:18:48.073 { 00:18:48.073 "subsystem": "nvmf", 00:18:48.073 "config": [ 00:18:48.073 { 00:18:48.073 "method": "nvmf_set_config", 00:18:48.073 "params": { 00:18:48.073 "discovery_filter": "match_any", 00:18:48.073 "admin_cmd_passthru": { 00:18:48.073 "identify_ctrlr": false 00:18:48.073 } 00:18:48.073 } 00:18:48.073 }, 00:18:48.073 { 00:18:48.073 "method": "nvmf_set_max_subsystems", 00:18:48.073 "params": { 00:18:48.073 "max_subsystems": 1024 00:18:48.073 } 00:18:48.073 }, 00:18:48.073 { 00:18:48.073 "method": "nvmf_set_crdt", 00:18:48.073 "params": { 00:18:48.073 "crdt1": 0, 00:18:48.073 "crdt2": 0, 00:18:48.073 "crdt3": 0 00:18:48.073 } 00:18:48.073 }, 00:18:48.073 { 00:18:48.073 "method": "nvmf_create_transport", 00:18:48.073 "params": { 00:18:48.073 "trtype": "TCP", 00:18:48.073 "max_queue_depth": 128, 00:18:48.073 "max_io_qpairs_per_ctrlr": 127, 00:18:48.073 "in_capsule_data_size": 4096, 00:18:48.073 "max_io_size": 131072, 00:18:48.073 "io_unit_size": 131072, 00:18:48.073 "max_aq_depth": 128, 00:18:48.073 "num_shared_buffers": 511, 00:18:48.073 "buf_cache_size": 4294967295, 00:18:48.073 "dif_insert_or_strip": false, 00:18:48.073 "zcopy": false, 00:18:48.073 "c2h_success": false, 00:18:48.073 "sock_priority": 0, 00:18:48.073 "abort_timeout_sec": 1, 00:18:48.073 "ack_timeout": 0, 00:18:48.073 "data_wr_pool_size": 0 00:18:48.073 } 00:18:48.073 }, 00:18:48.073 { 00:18:48.073 "method": "nvmf_create_subsystem", 00:18:48.073 "params": { 00:18:48.073 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:48.073 "allow_any_host": false, 00:18:48.073 "serial_number": "00000000000000000000", 00:18:48.073 "model_number": "SPDK bdev Controller", 00:18:48.073 "max_namespaces": 32, 00:18:48.073 "min_cntlid": 1, 00:18:48.073 "max_cntlid": 65519, 00:18:48.073 "ana_reporting": false 00:18:48.073 } 00:18:48.073 }, 00:18:48.073 { 00:18:48.073 "method": "nvmf_subsystem_add_host", 00:18:48.073 "params": { 00:18:48.073 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:48.073 "host": "nqn.2016-06.io.spdk:host1", 00:18:48.073 "psk": "key0" 00:18:48.073 } 00:18:48.073 }, 00:18:48.073 { 00:18:48.073 "method": "nvmf_subsystem_add_ns", 00:18:48.073 "params": { 00:18:48.073 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:48.073 "namespace": { 00:18:48.073 "nsid": 1, 
00:18:48.073 "bdev_name": "malloc0", 00:18:48.073 "nguid": "5FD512A5340641F8ACBEB720CFE01D06", 00:18:48.073 "uuid": "5fd512a5-3406-41f8-acbe-b720cfe01d06", 00:18:48.073 "no_auto_visible": false 00:18:48.073 } 00:18:48.073 } 00:18:48.073 }, 00:18:48.073 { 00:18:48.073 "method": "nvmf_subsystem_add_listener", 00:18:48.073 "params": { 00:18:48.073 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:48.073 "listen_address": { 00:18:48.073 "trtype": "TCP", 00:18:48.073 "adrfam": "IPv4", 00:18:48.073 "traddr": "10.0.0.2", 00:18:48.073 "trsvcid": "4420" 00:18:48.073 }, 00:18:48.073 "secure_channel": false, 00:18:48.073 "sock_impl": "ssl" 00:18:48.073 } 00:18:48.073 } 00:18:48.073 ] 00:18:48.073 } 00:18:48.073 ] 00:18:48.073 }' 00:18:48.073 23:45:23 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:48.638 23:45:23 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:18:48.638 "subsystems": [ 00:18:48.638 { 00:18:48.638 "subsystem": "keyring", 00:18:48.638 "config": [ 00:18:48.638 { 00:18:48.638 "method": "keyring_file_add_key", 00:18:48.638 "params": { 00:18:48.638 "name": "key0", 00:18:48.638 "path": "/tmp/tmp.JMaqrwsgzY" 00:18:48.638 } 00:18:48.638 } 00:18:48.638 ] 00:18:48.638 }, 00:18:48.638 { 00:18:48.638 "subsystem": "iobuf", 00:18:48.638 "config": [ 00:18:48.638 { 00:18:48.638 "method": "iobuf_set_options", 00:18:48.638 "params": { 00:18:48.638 "small_pool_count": 8192, 00:18:48.638 "large_pool_count": 1024, 00:18:48.638 "small_bufsize": 8192, 00:18:48.638 "large_bufsize": 135168 00:18:48.638 } 00:18:48.638 } 00:18:48.638 ] 00:18:48.638 }, 00:18:48.638 { 00:18:48.638 "subsystem": "sock", 00:18:48.638 "config": [ 00:18:48.638 { 00:18:48.638 "method": "sock_set_default_impl", 00:18:48.638 "params": { 00:18:48.638 "impl_name": "posix" 00:18:48.638 } 00:18:48.638 }, 00:18:48.638 { 00:18:48.638 "method": "sock_impl_set_options", 00:18:48.638 "params": { 00:18:48.638 "impl_name": "ssl", 00:18:48.638 "recv_buf_size": 4096, 00:18:48.638 "send_buf_size": 4096, 00:18:48.638 "enable_recv_pipe": true, 00:18:48.638 "enable_quickack": false, 00:18:48.638 "enable_placement_id": 0, 00:18:48.638 "enable_zerocopy_send_server": true, 00:18:48.638 "enable_zerocopy_send_client": false, 00:18:48.638 "zerocopy_threshold": 0, 00:18:48.638 "tls_version": 0, 00:18:48.638 "enable_ktls": false 00:18:48.638 } 00:18:48.638 }, 00:18:48.638 { 00:18:48.638 "method": "sock_impl_set_options", 00:18:48.638 "params": { 00:18:48.638 "impl_name": "posix", 00:18:48.638 "recv_buf_size": 2097152, 00:18:48.638 "send_buf_size": 2097152, 00:18:48.638 "enable_recv_pipe": true, 00:18:48.638 "enable_quickack": false, 00:18:48.638 "enable_placement_id": 0, 00:18:48.638 "enable_zerocopy_send_server": true, 00:18:48.638 "enable_zerocopy_send_client": false, 00:18:48.638 "zerocopy_threshold": 0, 00:18:48.638 "tls_version": 0, 00:18:48.638 "enable_ktls": false 00:18:48.638 } 00:18:48.638 } 00:18:48.638 ] 00:18:48.638 }, 00:18:48.638 { 00:18:48.638 "subsystem": "vmd", 00:18:48.638 "config": [] 00:18:48.638 }, 00:18:48.638 { 00:18:48.638 "subsystem": "accel", 00:18:48.638 "config": [ 00:18:48.638 { 00:18:48.638 "method": "accel_set_options", 00:18:48.638 "params": { 00:18:48.638 "small_cache_size": 128, 00:18:48.638 "large_cache_size": 16, 00:18:48.638 "task_count": 2048, 00:18:48.638 "sequence_count": 2048, 00:18:48.638 "buf_count": 2048 00:18:48.638 } 00:18:48.638 } 00:18:48.638 ] 00:18:48.638 }, 00:18:48.638 { 00:18:48.638 "subsystem": "bdev", 
00:18:48.638 "config": [ 00:18:48.638 { 00:18:48.638 "method": "bdev_set_options", 00:18:48.638 "params": { 00:18:48.638 "bdev_io_pool_size": 65535, 00:18:48.638 "bdev_io_cache_size": 256, 00:18:48.638 "bdev_auto_examine": true, 00:18:48.638 "iobuf_small_cache_size": 128, 00:18:48.638 "iobuf_large_cache_size": 16 00:18:48.638 } 00:18:48.638 }, 00:18:48.638 { 00:18:48.638 "method": "bdev_raid_set_options", 00:18:48.638 "params": { 00:18:48.638 "process_window_size_kb": 1024 00:18:48.638 } 00:18:48.638 }, 00:18:48.638 { 00:18:48.638 "method": "bdev_iscsi_set_options", 00:18:48.638 "params": { 00:18:48.638 "timeout_sec": 30 00:18:48.638 } 00:18:48.638 }, 00:18:48.638 { 00:18:48.638 "method": "bdev_nvme_set_options", 00:18:48.638 "params": { 00:18:48.638 "action_on_timeout": "none", 00:18:48.638 "timeout_us": 0, 00:18:48.638 "timeout_admin_us": 0, 00:18:48.638 "keep_alive_timeout_ms": 10000, 00:18:48.638 "arbitration_burst": 0, 00:18:48.638 "low_priority_weight": 0, 00:18:48.638 "medium_priority_weight": 0, 00:18:48.638 "high_priority_weight": 0, 00:18:48.638 "nvme_adminq_poll_period_us": 10000, 00:18:48.638 "nvme_ioq_poll_period_us": 0, 00:18:48.638 "io_queue_requests": 512, 00:18:48.638 "delay_cmd_submit": true, 00:18:48.638 "transport_retry_count": 4, 00:18:48.638 "bdev_retry_count": 3, 00:18:48.638 "transport_ack_timeout": 0, 00:18:48.638 "ctrlr_loss_timeout_sec": 0, 00:18:48.638 "reconnect_delay_sec": 0, 00:18:48.638 "fast_io_fail_timeout_sec": 0, 00:18:48.638 "disable_auto_failback": false, 00:18:48.638 "generate_uuids": false, 00:18:48.638 "transport_tos": 0, 00:18:48.638 "nvme_error_stat": false, 00:18:48.638 "rdma_srq_size": 0, 00:18:48.638 "io_path_stat": false, 00:18:48.638 "allow_accel_sequence": false, 00:18:48.638 "rdma_max_cq_size": 0, 00:18:48.638 "rdma_cm_event_timeout_ms": 0, 00:18:48.638 "dhchap_digests": [ 00:18:48.638 "sha256", 00:18:48.638 "sha384", 00:18:48.638 "sha512" 00:18:48.638 ], 00:18:48.638 "dhchap_dhgroups": [ 00:18:48.638 "null", 00:18:48.638 "ffdhe2048", 00:18:48.638 "ffdhe3072", 00:18:48.638 "ffdhe4096", 00:18:48.638 "ffdhe6144", 00:18:48.638 "ffdhe8192" 00:18:48.638 ] 00:18:48.638 } 00:18:48.638 }, 00:18:48.638 { 00:18:48.638 "method": "bdev_nvme_attach_controller", 00:18:48.638 "params": { 00:18:48.638 "name": "nvme0", 00:18:48.638 "trtype": "TCP", 00:18:48.638 "adrfam": "IPv4", 00:18:48.638 "traddr": "10.0.0.2", 00:18:48.638 "trsvcid": "4420", 00:18:48.638 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:48.638 "prchk_reftag": false, 00:18:48.638 "prchk_guard": false, 00:18:48.638 "ctrlr_loss_timeout_sec": 0, 00:18:48.638 "reconnect_delay_sec": 0, 00:18:48.638 "fast_io_fail_timeout_sec": 0, 00:18:48.638 "psk": "key0", 00:18:48.638 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:48.638 "hdgst": false, 00:18:48.638 "ddgst": false 00:18:48.638 } 00:18:48.638 }, 00:18:48.638 { 00:18:48.638 "method": "bdev_nvme_set_hotplug", 00:18:48.638 "params": { 00:18:48.638 "period_us": 100000, 00:18:48.638 "enable": false 00:18:48.638 } 00:18:48.638 }, 00:18:48.638 { 00:18:48.638 "method": "bdev_enable_histogram", 00:18:48.638 "params": { 00:18:48.638 "name": "nvme0n1", 00:18:48.638 "enable": true 00:18:48.638 } 00:18:48.638 }, 00:18:48.638 { 00:18:48.638 "method": "bdev_wait_for_examine" 00:18:48.638 } 00:18:48.638 ] 00:18:48.638 }, 00:18:48.638 { 00:18:48.638 "subsystem": "nbd", 00:18:48.638 "config": [] 00:18:48.638 } 00:18:48.638 ] 00:18:48.638 }' 00:18:48.638 23:45:23 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 3810681 00:18:48.638 23:45:23 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@948 -- # '[' -z 3810681 ']' 00:18:48.638 23:45:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3810681 00:18:48.639 23:45:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:48.639 23:45:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:48.639 23:45:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3810681 00:18:48.639 23:45:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:48.639 23:45:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:48.639 23:45:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3810681' 00:18:48.639 killing process with pid 3810681 00:18:48.639 23:45:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3810681 00:18:48.639 Received shutdown signal, test time was about 1.000000 seconds 00:18:48.639 00:18:48.639 Latency(us) 00:18:48.639 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:48.639 =================================================================================================================== 00:18:48.639 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:48.639 23:45:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3810681 00:18:48.897 23:45:23 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 3810537 00:18:48.897 23:45:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3810537 ']' 00:18:48.897 23:45:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3810537 00:18:48.897 23:45:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:48.897 23:45:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:48.897 23:45:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3810537 00:18:48.897 23:45:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:48.897 23:45:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:48.897 23:45:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3810537' 00:18:48.897 killing process with pid 3810537 00:18:48.897 23:45:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3810537 00:18:48.897 23:45:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3810537 00:18:49.155 23:45:24 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:18:49.155 23:45:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:49.155 23:45:24 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:18:49.155 "subsystems": [ 00:18:49.155 { 00:18:49.155 "subsystem": "keyring", 00:18:49.155 "config": [ 00:18:49.155 { 00:18:49.155 "method": "keyring_file_add_key", 00:18:49.155 "params": { 00:18:49.155 "name": "key0", 00:18:49.155 "path": "/tmp/tmp.JMaqrwsgzY" 00:18:49.155 } 00:18:49.155 } 00:18:49.155 ] 00:18:49.155 }, 00:18:49.155 { 00:18:49.155 "subsystem": "iobuf", 00:18:49.155 "config": [ 00:18:49.155 { 00:18:49.155 "method": "iobuf_set_options", 00:18:49.155 "params": { 00:18:49.155 "small_pool_count": 8192, 00:18:49.155 "large_pool_count": 1024, 00:18:49.155 "small_bufsize": 8192, 00:18:49.155 "large_bufsize": 135168 00:18:49.155 } 00:18:49.155 } 00:18:49.155 ] 00:18:49.155 }, 00:18:49.155 { 00:18:49.155 "subsystem": "sock", 00:18:49.155 "config": [ 00:18:49.155 { 
00:18:49.155 "method": "sock_set_default_impl", 00:18:49.155 "params": { 00:18:49.155 "impl_name": "posix" 00:18:49.155 } 00:18:49.155 }, 00:18:49.155 { 00:18:49.155 "method": "sock_impl_set_options", 00:18:49.155 "params": { 00:18:49.155 "impl_name": "ssl", 00:18:49.155 "recv_buf_size": 4096, 00:18:49.155 "send_buf_size": 4096, 00:18:49.155 "enable_recv_pipe": true, 00:18:49.155 "enable_quickack": false, 00:18:49.155 "enable_placement_id": 0, 00:18:49.155 "enable_zerocopy_send_server": true, 00:18:49.155 "enable_zerocopy_send_client": false, 00:18:49.155 "zerocopy_threshold": 0, 00:18:49.155 "tls_version": 0, 00:18:49.155 "enable_ktls": false 00:18:49.155 } 00:18:49.155 }, 00:18:49.155 { 00:18:49.155 "method": "sock_impl_set_options", 00:18:49.155 "params": { 00:18:49.155 "impl_name": "posix", 00:18:49.155 "recv_buf_size": 2097152, 00:18:49.155 "send_buf_size": 2097152, 00:18:49.155 "enable_recv_pipe": true, 00:18:49.155 "enable_quickack": false, 00:18:49.155 "enable_placement_id": 0, 00:18:49.155 "enable_zerocopy_send_server": true, 00:18:49.155 "enable_zerocopy_send_client": false, 00:18:49.155 "zerocopy_threshold": 0, 00:18:49.155 "tls_version": 0, 00:18:49.155 "enable_ktls": false 00:18:49.155 } 00:18:49.155 } 00:18:49.155 ] 00:18:49.155 }, 00:18:49.155 { 00:18:49.155 "subsystem": "vmd", 00:18:49.155 "config": [] 00:18:49.155 }, 00:18:49.155 { 00:18:49.155 "subsystem": "accel", 00:18:49.155 "config": [ 00:18:49.155 { 00:18:49.156 "method": "accel_set_options", 00:18:49.156 "params": { 00:18:49.156 "small_cache_size": 128, 00:18:49.156 "large_cache_size": 16, 00:18:49.156 "task_count": 2048, 00:18:49.156 "sequence_count": 2048, 00:18:49.156 "buf_count": 2048 00:18:49.156 } 00:18:49.156 } 00:18:49.156 ] 00:18:49.156 }, 00:18:49.156 { 00:18:49.156 "subsystem": "bdev", 00:18:49.156 "config": [ 00:18:49.156 { 00:18:49.156 "method": "bdev_set_options", 00:18:49.156 "params": { 00:18:49.156 "bdev_io_pool_size": 65535, 00:18:49.156 "bdev_io_cache_size": 256, 00:18:49.156 "bdev_auto_examine": true, 00:18:49.156 "iobuf_small_cache_size": 128, 00:18:49.156 "iobuf_large_cache_size": 16 00:18:49.156 } 00:18:49.156 }, 00:18:49.156 { 00:18:49.156 "method": "bdev_raid_set_options", 00:18:49.156 "params": { 00:18:49.156 "process_window_size_kb": 1024 00:18:49.156 } 00:18:49.156 }, 00:18:49.156 { 00:18:49.156 "method": "bdev_iscsi_set_options", 00:18:49.156 "params": { 00:18:49.156 "timeout_sec": 30 00:18:49.156 } 00:18:49.156 }, 00:18:49.156 { 00:18:49.156 "method": "bdev_nvme_set_options", 00:18:49.156 "params": { 00:18:49.156 "action_on_timeout": "none", 00:18:49.156 "timeout_us": 0, 00:18:49.156 "timeout_admin_us": 0, 00:18:49.156 "keep_alive_timeout_ms": 10000, 00:18:49.156 "arbitration_burst": 0, 00:18:49.156 "low_priority_weight": 0, 00:18:49.156 "medium_priority_weight": 0, 00:18:49.156 "high_priority_weight": 0, 00:18:49.156 "nvme_adminq_poll_period_us": 10000, 00:18:49.156 "nvme_ioq_poll_period_us": 0, 00:18:49.156 "io_queue_requests": 0, 00:18:49.156 "delay_cmd_submit": true, 00:18:49.156 "transport_retry_count": 4, 00:18:49.156 "bdev_retry_count": 3, 00:18:49.156 "transport_ack_timeout": 0, 00:18:49.156 "ctrlr_loss_timeout_sec": 0, 00:18:49.156 "reconnect_delay_sec": 0, 00:18:49.156 "fast_io_fail_timeout_sec": 0, 00:18:49.156 "disable_auto_failback": false, 00:18:49.156 "generate_uuids": false, 00:18:49.156 "transport_tos": 0, 00:18:49.156 "nvme_error_stat": false, 00:18:49.156 "rdma_srq_size": 0, 00:18:49.156 "io_path_stat": false, 00:18:49.156 "allow_accel_sequence": false, 00:18:49.156 
"rdma_max_cq_size": 0, 00:18:49.156 "rdma_cm_event_timeout_ms": 0, 00:18:49.156 "dhchap_digests": [ 00:18:49.156 "sha256", 00:18:49.156 "sha384", 00:18:49.156 "sha512" 00:18:49.156 ], 00:18:49.156 "dhchap_dhgroups": [ 00:18:49.156 "null", 00:18:49.156 "ffdhe2048", 00:18:49.156 "ffdhe3072", 00:18:49.156 "ffdhe4096", 00:18:49.156 "ffdhe6144", 00:18:49.156 "ffdhe8192" 00:18:49.156 ] 00:18:49.156 } 00:18:49.156 }, 00:18:49.156 { 00:18:49.156 "method": "bdev_nvme_set_hotplug", 00:18:49.156 "params": { 00:18:49.156 "period_us": 100000, 00:18:49.156 "enable": false 00:18:49.156 } 00:18:49.156 }, 00:18:49.156 { 00:18:49.156 "method": "bdev_malloc_create", 00:18:49.156 "params": { 00:18:49.156 "name": "malloc0", 00:18:49.156 "num_blocks": 8192, 00:18:49.156 "block_size": 4096, 00:18:49.156 "physical_block_size": 4096, 00:18:49.156 "uuid": "5fd512a5-3406-41f8-acbe-b720cfe01d06", 00:18:49.156 "optimal_io_boundary": 0 00:18:49.156 } 00:18:49.156 }, 00:18:49.156 { 00:18:49.156 "method": "bdev_wait_for_examine" 00:18:49.156 } 00:18:49.156 ] 00:18:49.156 }, 00:18:49.156 { 00:18:49.156 "subsystem": "nbd", 00:18:49.156 "config": [] 00:18:49.156 }, 00:18:49.156 { 00:18:49.156 "subsystem": "scheduler", 00:18:49.156 "config": [ 00:18:49.156 { 00:18:49.156 "method": "framework_set_scheduler", 00:18:49.156 "params": { 00:18:49.156 "name": "static" 00:18:49.156 } 00:18:49.156 } 00:18:49.156 ] 00:18:49.156 }, 00:18:49.156 { 00:18:49.156 "subsystem": "nvmf", 00:18:49.156 "config": [ 00:18:49.156 { 00:18:49.156 "method": "nvmf_set_config", 00:18:49.156 "params": { 00:18:49.156 "discovery_filter": "match_any", 00:18:49.156 "admin_cmd_passthru": { 00:18:49.156 "identify_ctrlr": false 00:18:49.156 } 00:18:49.156 } 00:18:49.156 }, 00:18:49.156 { 00:18:49.156 "method": "nvmf_set_max_subsystems", 00:18:49.156 "params": { 00:18:49.156 "max_subsystems": 1024 00:18:49.156 } 00:18:49.156 }, 00:18:49.156 { 00:18:49.156 "method": "nvmf_set_crdt", 00:18:49.156 "params": { 00:18:49.156 "crdt1": 0, 00:18:49.156 "crdt2": 0, 00:18:49.156 "crdt3": 0 00:18:49.156 } 00:18:49.156 }, 00:18:49.156 { 00:18:49.156 "method": "nvmf_create_transport", 00:18:49.156 "params": { 00:18:49.156 "trtype": "TCP", 00:18:49.156 "max_queue_depth": 128, 00:18:49.156 "max_io_qpairs_per_ctrlr": 127, 00:18:49.156 "in_capsule_data_size": 4096, 00:18:49.156 "max_io_size": 131072, 00:18:49.156 "io_unit_size": 131072, 00:18:49.156 "max_aq_depth": 128, 00:18:49.156 "num_shared_buffers": 511, 00:18:49.156 "buf_cache_size": 4294967295, 00:18:49.156 "dif_insert_or_strip": false, 00:18:49.156 "zcopy": false, 00:18:49.156 "c2h_success": false, 00:18:49.156 "sock_priority": 0, 00:18:49.156 "abort_timeout_sec": 1, 00:18:49.156 "ack_timeout": 0, 00:18:49.156 "data_wr_pool_size": 0 00:18:49.156 } 00:18:49.156 }, 00:18:49.156 { 00:18:49.156 "method": "nvmf_create_subsystem", 00:18:49.156 "params": { 00:18:49.156 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.156 "allow_any_host": false, 00:18:49.156 "serial_number": "00000000000000000000", 00:18:49.156 "model_number": "SPDK bdev Controller", 00:18:49.156 "max_namespaces": 32, 00:18:49.156 "min_cntlid": 1, 00:18:49.156 "max_cntlid": 65519, 00:18:49.156 "ana_reporting": false 00:18:49.156 } 00:18:49.156 }, 00:18:49.156 { 00:18:49.156 "method": "nvmf_subsystem_add_host", 00:18:49.156 "params": { 00:18:49.156 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.156 "host": "nqn.2016-06.io.spdk:host1", 00:18:49.156 "psk": "key0" 00:18:49.156 } 00:18:49.156 }, 00:18:49.156 { 00:18:49.156 "method": "nvmf_subsystem_add_ns", 00:18:49.156 
"params": { 00:18:49.156 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.156 "namespace": { 00:18:49.156 "nsid": 1, 00:18:49.156 "bdev_name": "malloc0", 00:18:49.156 "nguid": "5FD512A5340641F8ACBEB720CFE01D06", 00:18:49.156 "uuid": "5fd512a5-3406-41f8-acbe-b720cfe01d06", 00:18:49.156 "no_auto_visible": false 00:18:49.156 } 00:18:49.156 } 00:18:49.156 }, 00:18:49.156 { 00:18:49.156 "method": "nvmf_subsystem_add_listener", 00:18:49.156 "params": { 00:18:49.156 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.156 "listen_address": { 00:18:49.156 "trtype": "TCP", 00:18:49.156 "adrfam": "IPv4", 00:18:49.156 "traddr": "10.0.0.2", 00:18:49.156 "trsvcid": "4420" 00:18:49.156 }, 00:18:49.156 "secure_channel": false, 00:18:49.156 "sock_impl": "ssl" 00:18:49.156 } 00:18:49.156 } 00:18:49.156 ] 00:18:49.156 } 00:18:49.156 ] 00:18:49.156 }' 00:18:49.156 23:45:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:49.156 23:45:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.156 23:45:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3810976 00:18:49.156 23:45:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:49.156 23:45:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3810976 00:18:49.156 23:45:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3810976 ']' 00:18:49.156 23:45:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.156 23:45:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:49.156 23:45:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.156 23:45:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:49.156 23:45:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.156 [2024-07-15 23:45:24.094689] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:18:49.156 [2024-07-15 23:45:24.094774] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:49.156 EAL: No free 2048 kB hugepages reported on node 1 00:18:49.156 [2024-07-15 23:45:24.161759] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.156 [2024-07-15 23:45:24.267826] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:49.156 [2024-07-15 23:45:24.267894] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:49.156 [2024-07-15 23:45:24.267907] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:49.156 [2024-07-15 23:45:24.267918] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:49.156 [2024-07-15 23:45:24.267934] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:49.156 [2024-07-15 23:45:24.268061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.414 [2024-07-15 23:45:24.502564] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:49.414 [2024-07-15 23:45:24.534605] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:49.671 [2024-07-15 23:45:24.547157] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:50.234 23:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:50.234 23:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:50.234 23:45:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:50.234 23:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:50.234 23:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.235 23:45:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:50.235 23:45:25 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=3811128 00:18:50.235 23:45:25 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 3811128 /var/tmp/bdevperf.sock 00:18:50.235 23:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3811128 ']' 00:18:50.235 23:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:50.235 23:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:50.235 23:45:25 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:50.235 23:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:50.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
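Note on the invocation above: nvmfappstart passes -c /dev/fd/62, so the JSON subsystem tree echoed by tls.sh reaches nvmf_tgt over a file descriptor and never touches disk. A minimal sketch of the same pattern, assuming a config trimmed to the keyring subsystem alone (the -i/-e flags and the key path mirror the log; the trimming is illustrative, not the full config used here):

    # Hand an inline JSON config to nvmf_tgt via process substitution;
    # <(...) expands to a /dev/fd/NN path, matching the -c /dev/fd/62 seen above.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo '{
      "subsystems": [
        {
          "subsystem": "keyring",
          "config": [
            { "method": "keyring_file_add_key",
              "params": { "name": "key0", "path": "/tmp/tmp.JMaqrwsgzY" } }
          ]
        }
      ]
    }')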
00:18:50.235 23:45:25 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:18:50.235 "subsystems": [ 00:18:50.235 { 00:18:50.235 "subsystem": "keyring", 00:18:50.235 "config": [ 00:18:50.235 { 00:18:50.235 "method": "keyring_file_add_key", 00:18:50.235 "params": { 00:18:50.235 "name": "key0", 00:18:50.235 "path": "/tmp/tmp.JMaqrwsgzY" 00:18:50.235 } 00:18:50.235 } 00:18:50.235 ] 00:18:50.235 }, 00:18:50.235 { 00:18:50.235 "subsystem": "iobuf", 00:18:50.235 "config": [ 00:18:50.235 { 00:18:50.235 "method": "iobuf_set_options", 00:18:50.235 "params": { 00:18:50.235 "small_pool_count": 8192, 00:18:50.235 "large_pool_count": 1024, 00:18:50.235 "small_bufsize": 8192, 00:18:50.235 "large_bufsize": 135168 00:18:50.235 } 00:18:50.235 } 00:18:50.235 ] 00:18:50.235 }, 00:18:50.235 { 00:18:50.235 "subsystem": "sock", 00:18:50.235 "config": [ 00:18:50.235 { 00:18:50.235 "method": "sock_set_default_impl", 00:18:50.235 "params": { 00:18:50.235 "impl_name": "posix" 00:18:50.235 } 00:18:50.235 }, 00:18:50.235 { 00:18:50.235 "method": "sock_impl_set_options", 00:18:50.235 "params": { 00:18:50.235 "impl_name": "ssl", 00:18:50.235 "recv_buf_size": 4096, 00:18:50.235 "send_buf_size": 4096, 00:18:50.235 "enable_recv_pipe": true, 00:18:50.235 "enable_quickack": false, 00:18:50.235 "enable_placement_id": 0, 00:18:50.235 "enable_zerocopy_send_server": true, 00:18:50.235 "enable_zerocopy_send_client": false, 00:18:50.235 "zerocopy_threshold": 0, 00:18:50.235 "tls_version": 0, 00:18:50.235 "enable_ktls": false 00:18:50.235 } 00:18:50.235 }, 00:18:50.235 { 00:18:50.235 "method": "sock_impl_set_options", 00:18:50.235 "params": { 00:18:50.235 "impl_name": "posix", 00:18:50.235 "recv_buf_size": 2097152, 00:18:50.235 "send_buf_size": 2097152, 00:18:50.235 "enable_recv_pipe": true, 00:18:50.235 "enable_quickack": false, 00:18:50.235 "enable_placement_id": 0, 00:18:50.235 "enable_zerocopy_send_server": true, 00:18:50.235 "enable_zerocopy_send_client": false, 00:18:50.235 "zerocopy_threshold": 0, 00:18:50.235 "tls_version": 0, 00:18:50.235 "enable_ktls": false 00:18:50.235 } 00:18:50.235 } 00:18:50.235 ] 00:18:50.235 }, 00:18:50.235 { 00:18:50.235 "subsystem": "vmd", 00:18:50.235 "config": [] 00:18:50.235 }, 00:18:50.235 { 00:18:50.235 "subsystem": "accel", 00:18:50.235 "config": [ 00:18:50.235 { 00:18:50.235 "method": "accel_set_options", 00:18:50.235 "params": { 00:18:50.235 "small_cache_size": 128, 00:18:50.235 "large_cache_size": 16, 00:18:50.235 "task_count": 2048, 00:18:50.235 "sequence_count": 2048, 00:18:50.235 "buf_count": 2048 00:18:50.235 } 00:18:50.235 } 00:18:50.235 ] 00:18:50.235 }, 00:18:50.235 { 00:18:50.235 "subsystem": "bdev", 00:18:50.235 "config": [ 00:18:50.235 { 00:18:50.235 "method": "bdev_set_options", 00:18:50.235 "params": { 00:18:50.235 "bdev_io_pool_size": 65535, 00:18:50.235 "bdev_io_cache_size": 256, 00:18:50.235 "bdev_auto_examine": true, 00:18:50.235 "iobuf_small_cache_size": 128, 00:18:50.235 "iobuf_large_cache_size": 16 00:18:50.235 } 00:18:50.235 }, 00:18:50.235 { 00:18:50.235 "method": "bdev_raid_set_options", 00:18:50.235 "params": { 00:18:50.235 "process_window_size_kb": 1024 00:18:50.235 } 00:18:50.235 }, 00:18:50.235 { 00:18:50.235 "method": "bdev_iscsi_set_options", 00:18:50.235 "params": { 00:18:50.235 "timeout_sec": 30 00:18:50.235 } 00:18:50.235 }, 00:18:50.235 { 00:18:50.235 "method": "bdev_nvme_set_options", 00:18:50.235 "params": { 00:18:50.235 "action_on_timeout": "none", 00:18:50.235 "timeout_us": 0, 00:18:50.235 "timeout_admin_us": 0, 00:18:50.235 "keep_alive_timeout_ms": 
10000, 00:18:50.235 "arbitration_burst": 0, 00:18:50.235 "low_priority_weight": 0, 00:18:50.235 "medium_priority_weight": 0, 00:18:50.235 "high_priority_weight": 0, 00:18:50.235 "nvme_adminq_poll_period_us": 10000, 00:18:50.235 "nvme_ioq_poll_period_us": 0, 00:18:50.235 "io_queue_requests": 512, 00:18:50.235 "delay_cmd_submit": true, 00:18:50.235 "transport_retry_count": 4, 00:18:50.235 "bdev_retry_count": 3, 00:18:50.235 "transport_ack_timeout": 0, 00:18:50.235 "ctrlr_loss_timeout_sec": 0, 00:18:50.235 "reconnect_delay_sec": 0, 00:18:50.235 "fast_io_fail_timeout_sec": 0, 00:18:50.235 "disable_auto_failback": false, 00:18:50.235 "generate_uuids": false, 00:18:50.235 "transport_tos": 0, 00:18:50.235 "nvme_error_stat": false, 00:18:50.235 "rdma_srq_size": 0, 00:18:50.235 "io_path_stat": false, 00:18:50.235 "allow_accel_sequence": false, 00:18:50.235 "rdma_max_cq_size": 0, 00:18:50.235 "rdma_cm_event_timeout_ms": 0, 00:18:50.235 "dhchap_digests": [ 00:18:50.235 "sha256", 00:18:50.235 "sha384", 00:18:50.235 "sha512" 00:18:50.235 ], 00:18:50.235 "dhchap_dhgroups": [ 00:18:50.235 "null", 00:18:50.235 "ffdhe2048", 00:18:50.235 "ffdhe3072", 00:18:50.235 "ffdhe4096", 00:18:50.235 "ffdhe6144", 00:18:50.235 "ffdhe8192" 00:18:50.235 ] 00:18:50.235 } 00:18:50.235 }, 00:18:50.235 { 00:18:50.235 "method": "bdev_nvme_attach_controller", 00:18:50.235 "params": { 00:18:50.235 "name": "nvme0", 00:18:50.235 "trtype": "TCP", 00:18:50.235 "adrfam": "IPv4", 00:18:50.235 "traddr": "10.0.0.2", 00:18:50.235 "trsvcid": "4420", 00:18:50.235 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:50.235 "prchk_reftag": false, 00:18:50.235 "prchk_guard": false, 00:18:50.235 "ctrlr_loss_timeout_sec": 0, 00:18:50.235 "reconnect_delay_sec": 0, 00:18:50.235 "fast_io_fail_timeout_sec": 0, 00:18:50.235 "psk": "key0", 00:18:50.235 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:50.235 "hdgst": false, 00:18:50.235 "ddgst": false 00:18:50.235 } 00:18:50.235 }, 00:18:50.235 { 00:18:50.235 "method": "bdev_nvme_set_hotplug", 00:18:50.235 "params": { 00:18:50.235 "period_us": 100000, 00:18:50.235 "enable": false 00:18:50.236 } 00:18:50.236 }, 00:18:50.236 { 00:18:50.236 "method": "bdev_enable_histogram", 00:18:50.236 "params": { 00:18:50.236 "name": "nvme0n1", 00:18:50.236 "enable": true 00:18:50.236 } 00:18:50.236 }, 00:18:50.236 { 00:18:50.236 "method": "bdev_wait_for_examine" 00:18:50.236 } 00:18:50.236 ] 00:18:50.236 }, 00:18:50.236 { 00:18:50.236 "subsystem": "nbd", 00:18:50.236 "config": [] 00:18:50.236 } 00:18:50.236 ] 00:18:50.236 }' 00:18:50.236 23:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:50.236 23:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.236 [2024-07-15 23:45:25.132158] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization...
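bdevperf is launched here with -z, which makes it wait on its private RPC socket (-r /var/tmp/bdevperf.sock) instead of running immediately; the harness first checks that the controller from the JSON config attached, then starts the timed run. A condensed sketch of that sequence, using repo-relative paths (gen_bdevperf_config is a stand-in for the echo '{...}' above, not a real helper in the tree):

    # Start bdevperf suspended on its own RPC socket, config fed over a file descriptor.
    ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(gen_bdevperf_config) &
    # Verify the NVMe-oF controller from the config actually attached (expects nvme0) ...
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
    # ... then trigger the timed verify workload.
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests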
00:18:50.236 [2024-07-15 23:45:25.132246] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3811128 ] 00:18:50.236 EAL: No free 2048 kB hugepages reported on node 1 00:18:50.236 [2024-07-15 23:45:25.190055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.236 [2024-07-15 23:45:25.298716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:50.492 [2024-07-15 23:45:25.462714] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:51.056 23:45:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:51.056 23:45:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:51.056 23:45:26 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:51.056 23:45:26 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:18:51.312 23:45:26 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.312 23:45:26 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:51.569 Running I/O for 1 seconds... 00:18:52.499 00:18:52.499 Latency(us) 00:18:52.499 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:52.499 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:52.499 Verification LBA range: start 0x0 length 0x2000 00:18:52.499 nvme0n1 : 1.02 3269.21 12.77 0.00 0.00 38739.30 7718.68 44273.21 00:18:52.499 =================================================================================================================== 00:18:52.499 Total : 3269.21 12.77 0.00 0.00 38739.30 7718.68 44273.21 00:18:52.499 0 00:18:52.499 23:45:27 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:18:52.499 23:45:27 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:18:52.499 23:45:27 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:52.499 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:18:52.499 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:18:52.499 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:52.499 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:52.499 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:52.499 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:52.499 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:52.499 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:52.499 nvmf_trace.0 00:18:52.499 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:18:52.499 23:45:27 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 3811128 00:18:52.499 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3811128 ']' 00:18:52.500 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- 
# kill -0 3811128 00:18:52.500 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:52.500 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:52.500 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3811128 00:18:52.500 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:52.500 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:52.500 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3811128' 00:18:52.500 killing process with pid 3811128 00:18:52.500 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3811128 00:18:52.500 Received shutdown signal, test time was about 1.000000 seconds 00:18:52.500 00:18:52.500 Latency(us) 00:18:52.500 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:52.500 =================================================================================================================== 00:18:52.500 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:52.500 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3811128 00:18:52.757 23:45:27 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:52.757 23:45:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:52.757 23:45:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:18:53.015 23:45:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:53.015 23:45:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:18:53.015 23:45:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:53.015 23:45:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:53.015 rmmod nvme_tcp 00:18:53.015 rmmod nvme_fabrics 00:18:53.015 rmmod nvme_keyring 00:18:53.015 23:45:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:53.015 23:45:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:18:53.015 23:45:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:18:53.015 23:45:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 3810976 ']' 00:18:53.015 23:45:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 3810976 00:18:53.015 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3810976 ']' 00:18:53.015 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3810976 00:18:53.015 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:53.015 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:53.015 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3810976 00:18:53.015 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:53.015 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:53.015 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3810976' 00:18:53.015 killing process with pid 3810976 00:18:53.015 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3810976 00:18:53.015 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3810976 00:18:53.273 23:45:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:53.273 23:45:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:53.273 23:45:28 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:53.273 23:45:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:53.273 23:45:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:53.273 23:45:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:53.273 23:45:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:53.273 23:45:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.171 23:45:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:55.171 23:45:30 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.tLoC807Z0B /tmp/tmp.GCuKgIgJWW /tmp/tmp.JMaqrwsgzY 00:18:55.171 00:18:55.171 real 1m20.052s 00:18:55.171 user 2m7.313s 00:18:55.171 sys 0m26.101s 00:18:55.171 23:45:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:55.171 23:45:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.171 ************************************ 00:18:55.171 END TEST nvmf_tls 00:18:55.171 ************************************ 00:18:55.429 23:45:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:55.429 23:45:30 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:55.429 23:45:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:55.429 23:45:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:55.429 23:45:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:55.430 ************************************ 00:18:55.430 START TEST nvmf_fips 00:18:55.430 ************************************ 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:55.430 * Looking for test storage... 
00:18:55.430 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.430 23:45:30 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:18:55.430 Error setting digest 00:18:55.430 00A243897F7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:18:55.430 00A243897F7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:18:55.430 23:45:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:57.959 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:57.959 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:18:57.959 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:57.959 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:57.959 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:57.959 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:57.959 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:57.959 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:18:57.959 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:57.959 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:18:57.959 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:18:57.959 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:18:57.959 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:18:57.959 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:18:57.959 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:18:57.959 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:57.959 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:57.959 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:57.959 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:57.959 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:57.959 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:57.959 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:57.959 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:57.959 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:57.959 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:57.959 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:57.959 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:57.959 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:57.959 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:57.959 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:57.959 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:57.959 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:57.959 
23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:57.959 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:57.959 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:57.959 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:57.959 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:57.959 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:57.959 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:57.959 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:57.960 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:57.960 Found net devices under 0000:09:00.0: cvl_0_0 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:57.960 Found net devices under 0000:09:00.1: cvl_0_1 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:57.960 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:57.960 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:18:57.960 00:18:57.960 --- 10.0.0.2 ping statistics --- 00:18:57.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.960 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:57.960 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:57.960 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:18:57.960 00:18:57.960 --- 10.0.0.1 ping statistics --- 00:18:57.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.960 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=3813480 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 3813480 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3813480 ']' 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:57.960 23:45:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:57.960 [2024-07-15 23:45:32.869451] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:18:57.960 [2024-07-15 23:45:32.869525] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:57.960 EAL: No free 2048 kB hugepages reported on node 1 00:18:57.960 [2024-07-15 23:45:32.930565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.960 [2024-07-15 23:45:33.034816] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:57.960 [2024-07-15 23:45:33.034875] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:57.960 [2024-07-15 23:45:33.034898] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:57.960 [2024-07-15 23:45:33.034909] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:57.960 [2024-07-15 23:45:33.034918] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:57.961 [2024-07-15 23:45:33.034943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:58.892 23:45:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:58.892 23:45:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:18:58.892 23:45:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:58.892 23:45:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:58.892 23:45:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:58.892 23:45:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:58.892 23:45:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:58.892 23:45:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:58.892 23:45:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:58.892 23:45:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:58.892 23:45:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:58.892 23:45:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:58.892 23:45:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:58.892 23:45:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:59.150 [2024-07-15 23:45:34.141868] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:59.150 [2024-07-15 23:45:34.157855] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:59.150 [2024-07-15 23:45:34.158105] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:59.150 [2024-07-15 23:45:34.188919] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:59.150 malloc0 00:18:59.150 23:45:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:59.150 23:45:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=3813646 00:18:59.150 23:45:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:59.150 23:45:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 3813646 /var/tmp/bdevperf.sock 00:18:59.150 23:45:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3813646 ']' 00:18:59.150 23:45:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:59.150 23:45:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:18:59.150 23:45:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:59.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:59.150 23:45:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:59.150 23:45:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:59.408 [2024-07-15 23:45:34.283345] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:18:59.408 [2024-07-15 23:45:34.283440] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3813646 ] 00:18:59.408 EAL: No free 2048 kB hugepages reported on node 1 00:18:59.408 [2024-07-15 23:45:34.340193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.408 [2024-07-15 23:45:34.446911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:59.665 23:45:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:59.665 23:45:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:18:59.665 23:45:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:59.922 [2024-07-15 23:45:34.829404] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:59.922 [2024-07-15 23:45:34.829523] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:59.922 TLSTESTn1 00:18:59.922 23:45:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:59.922 Running I/O for 10 seconds... 
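Condensed, the TLS flow exercised above is short. A minimal sketch, writing $SPDK for the workspace checkout; the target-side RPCs are collapsed behind setup_nvmf_tgt_conf in the trace, so their exact flags here are assumed from the usual fips.sh sequence rather than quoted from it:

    # interchange-format TLS PSK, stored with owner-only permissions
    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    echo -n "$key" > key.txt
    chmod 0600 key.txt

    # target side (assumed): malloc0 namespace, TCP listener, host keyed by the PSK file
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $SPDK/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key.txt

    # initiator side, as run verbatim above: bdevperf attaches over TLS with the same key
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key.txt

In the 10-second verify run whose results follow, MiB/s is just IOPS times the 4096-byte IO size: 3197.55 * 4096 / 2^20 ~= 12.49.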
00:19:12.110 
00:19:12.110 Latency(us)
00:19:12.110 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:12.110 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:12.110 Verification LBA range: start 0x0 length 0x2000
00:19:12.110 TLSTESTn1 : 10.02 3197.55 12.49 0.00 0.00 39956.44 8592.50 40001.23
00:19:12.110 ===================================================================================================================
00:19:12.110 Total : 3197.55 12.49 0.00 0.00 39956.44 8592.50 40001.23
00:19:12.110 0
00:19:12.110 23:45:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 23:45:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 23:45:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 23:45:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 23:45:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 23:45:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 23:45:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 23:45:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 23:45:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 23:45:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:19:12.110 nvmf_trace.0
00:19:12.110 23:45:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0
00:19:12.110 23:45:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3813646
00:19:12.110 23:45:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3813646 ']'
00:19:12.110 23:45:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3813646
00:19:12.110 23:45:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname
00:19:12.110 23:45:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:12.110 23:45:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3813646
00:19:12.110 23:45:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:19:12.110 23:45:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:19:12.110 23:45:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3813646' killing process with pid 3813646
00:19:12.110 23:45:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3813646
00:19:12.110 Received shutdown signal, test time was about 10.000000 seconds
00:19:12.110 
00:19:12.110 Latency(us)
00:19:12.110 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:12.110 ===================================================================================================================
00:19:12.110 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:12.110 [2024-07-15 23:45:45.185905] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:19:12.110 23:45:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3813646
00:19:12.110 23:45:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini
00:19:12.110 23:45:45 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:19:12.110 23:45:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:19:12.110 23:45:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:12.110 23:45:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:19:12.110 23:45:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:12.110 23:45:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:12.110 rmmod nvme_tcp 00:19:12.110 rmmod nvme_fabrics 00:19:12.110 rmmod nvme_keyring 00:19:12.110 23:45:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:12.110 23:45:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:19:12.110 23:45:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:19:12.110 23:45:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 3813480 ']' 00:19:12.110 23:45:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 3813480 00:19:12.110 23:45:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3813480 ']' 00:19:12.110 23:45:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3813480 00:19:12.110 23:45:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:19:12.110 23:45:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:12.110 23:45:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3813480 00:19:12.110 23:45:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:12.110 23:45:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:12.110 23:45:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3813480' 00:19:12.110 killing process with pid 3813480 00:19:12.110 23:45:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3813480 00:19:12.110 [2024-07-15 23:45:45.501179] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:12.111 23:45:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3813480 00:19:12.111 23:45:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:12.111 23:45:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:12.111 23:45:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:12.111 23:45:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:12.111 23:45:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:12.111 23:45:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:12.111 23:45:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:12.111 23:45:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.675 23:45:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:12.675 23:45:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:12.675 00:19:12.675 real 0m17.455s 00:19:12.675 user 0m19.003s 00:19:12.675 sys 0m6.996s 00:19:12.675 23:45:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:12.675 23:45:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:12.675 ************************************ 00:19:12.675 END TEST nvmf_fips 
00:19:12.675 ************************************ 00:19:12.969 23:45:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:12.969 23:45:47 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:19:12.969 23:45:47 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:19:12.969 23:45:47 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:19:12.969 23:45:47 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:19:12.969 23:45:47 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:19:12.969 23:45:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:14.871 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:14.871 23:45:49 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:14.871 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:14.871 Found net devices under 0000:09:00.0: cvl_0_0 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:14.871 Found net devices under 0000:09:00.1: cvl_0_1 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:19:14.871 23:45:49 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:14.872 23:45:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:14.872 23:45:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
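The device walk traced above is plain sysfs plumbing: classify each PCI function by its vendor/device ID, then resolve the kernel interface the driver bound to it. A condensed sketch of that logic (loop and variable names are this sketch's own, not common.sh's):

    intel=0x8086
    e810=(0x1592 0x159b)                       # the 0x159b parts matched here
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor")
        device=$(<"$pci/device")
        [[ $vendor == "$intel" ]] || continue
        [[ " ${e810[*]} " == *" $device "* ]] || continue
        [[ -d "$pci/net" ]] || continue
        for net_dev in "$pci"/net/*; do        # the driver exposes the bound netdev here
            echo "Found net device under ${pci##*/}: ${net_dev##*/}"
        done
    done

Run against this host it would report the same two interfaces the trace found, cvl_0_0 and cvl_0_1, under 0000:09:00.0 and 0000:09:00.1.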
00:19:14.872 23:45:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:15.130 ************************************ 00:19:15.130 START TEST nvmf_perf_adq 00:19:15.130 ************************************ 00:19:15.130 23:45:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:15.130 * Looking for test storage... 00:19:15.130 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:15.130 23:45:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:15.130 23:45:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:15.130 23:45:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:15.130 23:45:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:15.130 23:45:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:15.130 23:45:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:15.130 23:45:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:15.130 23:45:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:15.130 23:45:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:15.130 23:45:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:15.130 23:45:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:15.130 23:45:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:15.130 23:45:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:15.130 23:45:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:15.130 23:45:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:15.130 23:45:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:15.130 23:45:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:15.130 23:45:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:15.130 23:45:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:15.130 23:45:50 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:15.130 23:45:50 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:15.130 23:45:50 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:15.130 23:45:50 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.130 23:45:50 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.130 23:45:50 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.130 23:45:50 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:15.130 23:45:50 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.130 23:45:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:19:15.130 23:45:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:15.130 23:45:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:15.130 23:45:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:15.130 23:45:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:15.130 23:45:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:15.130 23:45:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:15.130 23:45:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:15.130 23:45:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:15.130 23:45:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:15.130 23:45:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:15.130 23:45:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:17.028 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:17.028 Found 0000:09:00.1 (0x8086 - 0x159b) 
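For reference, the vendor/device tables this scan assembles group as follows (a summary of the assignments visible in the trace, not an exhaustive list of supported parts):

    intel=0x8086; mellanox=0x15b3
    e810=(0x1592 0x159b)            # E810 family; both ports found here are 0x159b
    x722=(0x37d2)
    mlx=(0xa2dc 0x1021 0xa2d6 0x101d 0x1017 0x1019 0x1015 0x1013)
    pci_devs=("${e810[@]}")         # transport is tcp and the hits are e810, so only e810 is kept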
00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:17.028 Found net devices under 0000:09:00.0: cvl_0_0 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:17.028 Found net devices under 0000:09:00.1: cvl_0_1 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:19:17.028 23:45:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:17.960 23:45:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:19.870 23:45:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:25.193 23:45:59 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:25.193 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:25.193 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:25.193 Found net devices under 0000:09:00.0: cvl_0_0 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:25.193 Found net devices under 0000:09:00.1: cvl_0_1 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:25.193 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:25.194 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:25.194 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:25.194 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:25.194 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:25.194 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:25.194 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:25.194 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:25.194 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:25.194 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:25.194 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:25.194 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:25.194 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:25.194 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:25.194 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:25.194 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:25.194 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:25.194 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:25.194 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:25.194 23:45:59 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:25.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:25.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:19:25.194 00:19:25.194 --- 10.0.0.2 ping statistics --- 00:19:25.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:25.194 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:19:25.194 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:25.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:25.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:19:25.194 00:19:25.194 --- 10.0.0.1 ping statistics --- 00:19:25.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:25.194 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:19:25.194 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:25.194 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:25.194 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:25.194 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:25.194 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:25.194 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:25.194 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:25.194 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:25.194 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:25.194 23:45:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:25.194 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:25.194 23:45:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:25.194 23:45:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:25.194 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3819507 00:19:25.194 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:25.194 23:45:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3819507 00:19:25.194 23:45:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3819507 ']' 00:19:25.194 23:45:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.194 23:45:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:25.194 23:45:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.194 23:45:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:25.194 23:45:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:25.194 [2024-07-15 23:45:59.949753] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
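The namespace topology just built reads back as a dozen lines of iproute2: the two back-to-back E810 ports are split so that the target port (cvl_0_0) lives in its own namespace with 10.0.0.2 while the initiator port (cvl_0_1) stays in the root namespace with 10.0.0.1, and TCP port 4420 is opened for NVMe/TCP. A recap of the commands as executed above, with the target then launched entirely inside the namespace ($SPDK again standing in for the workspace checkout):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> root ns

    ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc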
00:19:25.194 [2024-07-15 23:45:59.949835] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:25.194 EAL: No free 2048 kB hugepages reported on node 1 00:19:25.194 [2024-07-15 23:46:00.013926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:25.194 [2024-07-15 23:46:00.117771] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:25.194 [2024-07-15 23:46:00.117830] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:25.194 [2024-07-15 23:46:00.117844] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:25.194 [2024-07-15 23:46:00.117854] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:25.194 [2024-07-15 23:46:00.117863] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:25.194 [2024-07-15 23:46:00.117945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:25.194 [2024-07-15 23:46:00.118079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:25.194 [2024-07-15 23:46:00.118106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:25.194 [2024-07-15 23:46:00.118109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.194 23:46:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:25.194 23:46:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:19:25.194 23:46:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:25.194 23:46:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:25.194 23:46:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:25.194 23:46:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:25.194 23:46:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:19:25.194 23:46:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:25.194 23:46:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:25.194 23:46:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.194 23:46:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:25.194 23:46:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.194 23:46:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:25.194 23:46:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:25.194 23:46:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.194 23:46:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:25.194 23:46:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.194 23:46:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:25.194 23:46:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.194 23:46:00 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:19:25.451 23:46:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.451 23:46:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:25.451 23:46:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.451 23:46:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:25.451 [2024-07-15 23:46:00.325675] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:25.451 23:46:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.451 23:46:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:25.451 23:46:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.451 23:46:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:25.451 Malloc1 00:19:25.451 23:46:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.451 23:46:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:25.451 23:46:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.451 23:46:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:25.451 23:46:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.451 23:46:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:25.451 23:46:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.451 23:46:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:25.451 23:46:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.451 23:46:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:25.451 23:46:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.451 23:46:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:25.451 [2024-07-15 23:46:00.375911] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:25.451 23:46:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.451 23:46:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=3819544 00:19:25.451 23:46:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:25.451 23:46:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:19:25.451 EAL: No free 2048 kB hugepages reported on node 1 00:19:27.350 23:46:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:19:27.350 23:46:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.350 23:46:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:27.350 23:46:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.350 23:46:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:19:27.350 
"tick_rate": 2700000000, 00:19:27.350 "poll_groups": [ 00:19:27.350 { 00:19:27.350 "name": "nvmf_tgt_poll_group_000", 00:19:27.350 "admin_qpairs": 1, 00:19:27.350 "io_qpairs": 1, 00:19:27.350 "current_admin_qpairs": 1, 00:19:27.350 "current_io_qpairs": 1, 00:19:27.350 "pending_bdev_io": 0, 00:19:27.350 "completed_nvme_io": 18725, 00:19:27.350 "transports": [ 00:19:27.350 { 00:19:27.350 "trtype": "TCP" 00:19:27.350 } 00:19:27.350 ] 00:19:27.350 }, 00:19:27.350 { 00:19:27.350 "name": "nvmf_tgt_poll_group_001", 00:19:27.350 "admin_qpairs": 0, 00:19:27.350 "io_qpairs": 1, 00:19:27.350 "current_admin_qpairs": 0, 00:19:27.350 "current_io_qpairs": 1, 00:19:27.350 "pending_bdev_io": 0, 00:19:27.350 "completed_nvme_io": 21381, 00:19:27.350 "transports": [ 00:19:27.350 { 00:19:27.350 "trtype": "TCP" 00:19:27.350 } 00:19:27.350 ] 00:19:27.350 }, 00:19:27.350 { 00:19:27.350 "name": "nvmf_tgt_poll_group_002", 00:19:27.350 "admin_qpairs": 0, 00:19:27.350 "io_qpairs": 1, 00:19:27.350 "current_admin_qpairs": 0, 00:19:27.350 "current_io_qpairs": 1, 00:19:27.350 "pending_bdev_io": 0, 00:19:27.350 "completed_nvme_io": 20826, 00:19:27.350 "transports": [ 00:19:27.350 { 00:19:27.350 "trtype": "TCP" 00:19:27.350 } 00:19:27.350 ] 00:19:27.350 }, 00:19:27.350 { 00:19:27.350 "name": "nvmf_tgt_poll_group_003", 00:19:27.350 "admin_qpairs": 0, 00:19:27.350 "io_qpairs": 1, 00:19:27.350 "current_admin_qpairs": 0, 00:19:27.350 "current_io_qpairs": 1, 00:19:27.350 "pending_bdev_io": 0, 00:19:27.350 "completed_nvme_io": 20865, 00:19:27.350 "transports": [ 00:19:27.350 { 00:19:27.350 "trtype": "TCP" 00:19:27.350 } 00:19:27.350 ] 00:19:27.350 } 00:19:27.350 ] 00:19:27.350 }' 00:19:27.350 23:46:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:27.350 23:46:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:19:27.350 23:46:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:19:27.350 23:46:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:19:27.350 23:46:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 3819544 00:19:35.484 Initializing NVMe Controllers 00:19:35.484 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:35.484 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:35.484 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:35.484 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:35.484 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:35.484 Initialization complete. Launching workers. 
00:19:35.484 ========================================================
00:19:35.484 Latency(us)
00:19:35.484 Device Information : IOPS MiB/s Average min max
00:19:35.484 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10963.20 42.82 5838.60 4955.43 7595.62
00:19:35.484 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11221.60 43.83 5703.58 2727.64 8292.74
00:19:35.484 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10947.60 42.76 5847.29 2818.01 8501.38
00:19:35.484 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9836.80 38.42 6507.96 2702.74 9664.19
00:19:35.484 ========================================================
00:19:35.484 Total : 42969.19 167.85 5958.79 2702.74 9664.19
00:19:35.484 
00:19:35.484 
23:46:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 23:46:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 23:46:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 23:46:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 23:46:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 23:46:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 23:46:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp rmmod nvme_tcp rmmod nvme_fabrics rmmod nvme_keyring 23:46:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 23:46:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 23:46:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 23:46:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3819507 ']' 23:46:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3819507 23:46:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3819507 ']' 23:46:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3819507 23:46:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 23:46:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 23:46:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3819507 23:46:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 23:46:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 23:46:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3819507' killing process with pid 3819507 23:46:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3819507 23:46:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3819507 23:46:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 23:46:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 23:46:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 23:46:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 23:46:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 23:46:10 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.742 23:46:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:35.742 23:46:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:38.284 23:46:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:38.284 23:46:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:19:38.284 23:46:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:38.541 23:46:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:40.439 23:46:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:45.715 23:46:20 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:45.715 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:45.715 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
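The device discovery traced above is a sysfs walk: each supported PCI function (here two E810 ports, device ID 0x159b) is resolved to its kernel net device by globbing /sys/bus/pci/devices/$pci/net/. A condensed sketch of that pattern; the lspci/awk front end is an assumption, while the glob and the echo format mirror the trace:

    # enumerate E810 (8086:159b) functions and map each to its net device
    for pci in $(lspci -Dnn | awk '/8086:159b/ {print $1}'); do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # same glob as in the trace
      for dev in "${pci_net_devs[@]}"; do
        [[ -e $dev ]] && echo "Found net devices under $pci: ${dev##*/}"
      done
    done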
00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:45.715 Found net devices under 0000:09:00.0: cvl_0_0 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:45.715 Found net devices under 0000:09:00.1: cvl_0_1 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:45.715 
23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:45.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:45.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:19:45.715 00:19:45.715 --- 10.0.0.2 ping statistics --- 00:19:45.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.715 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:45.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:45.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:19:45.715 00:19:45.715 --- 10.0.0.1 ping statistics --- 00:19:45.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.715 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:45.715 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:45.716 23:46:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:19:45.716 23:46:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:19:45.716 23:46:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:19:45.716 23:46:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:19:45.716 net.core.busy_poll = 1 00:19:45.716 23:46:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:19:45.716 net.core.busy_read = 1 00:19:45.716 23:46:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:19:45.716 23:46:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:19:45.716 23:46:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:19:45.716 23:46:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:19:45.716 23:46:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:19:45.716 23:46:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:45.716 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:45.716 23:46:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:45.716 23:46:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:45.716 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3822159 00:19:45.716 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:45.716 23:46:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3822159 00:19:45.716 23:46:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3822159 ']' 00:19:45.716 23:46:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.716 23:46:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:45.716 23:46:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:45.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:45.716 23:46:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:45.716 23:46:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:45.974 [2024-07-15 23:46:20.872436] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:19:45.974 [2024-07-15 23:46:20.872540] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:45.974 EAL: No free 2048 kB hugepages reported on node 1 00:19:45.974 [2024-07-15 23:46:20.938803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:45.974 [2024-07-15 23:46:21.046844] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:45.974 [2024-07-15 23:46:21.046896] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:45.974 [2024-07-15 23:46:21.046916] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:45.974 [2024-07-15 23:46:21.046932] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:45.974 [2024-07-15 23:46:21.046974] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
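Stripped of the ip netns exec wrappers, the adq_configure_driver sequence that just ran boils down to the following commands (cvl_0_0, 10.0.0.2 and port 4420 are the values used in this run; the comments are interpretation, not part of the trace):

    ethtool --offload cvl_0_0 hw-tc-offload on
    ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # two traffic classes: TC0 = queues 0-1 (default), TC1 = queues 2-3 (NVMe/TCP)
    tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev cvl_0_0 ingress
    # steer NVMe/TCP traffic (dst port 4420) into TC1 in hardware, no software fallback
    tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The trace then runs scripts/perf/nvmf/set_xps_rxqs cvl_0_0, which (judging by its name) ties XPS transmit-queue selection to the corresponding receive queues.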
00:19:45.974 [2024-07-15 23:46:21.047037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:45.974 [2024-07-15 23:46:21.047094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:45.974 [2024-07-15 23:46:21.047179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.974 [2024-07-15 23:46:21.047171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:46.231 23:46:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:46.231 23:46:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:19:46.231 23:46:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:46.232 23:46:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:46.232 23:46:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:46.232 23:46:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:46.232 23:46:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:19:46.232 23:46:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:46.232 23:46:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:46.232 23:46:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.232 23:46:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:46.232 23:46:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.232 23:46:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:46.232 23:46:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:19:46.232 23:46:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.232 23:46:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:46.232 23:46:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.232 23:46:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:46.232 23:46:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.232 23:46:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:46.232 23:46:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.232 23:46:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:19:46.232 23:46:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.232 23:46:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:46.232 [2024-07-15 23:46:21.302992] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:46.232 23:46:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.232 23:46:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:46.232 23:46:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.232 23:46:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:46.232 Malloc1 00:19:46.232 23:46:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.232 23:46:21 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:46.232 23:46:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.232 23:46:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:46.232 23:46:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.232 23:46:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:46.232 23:46:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.232 23:46:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:46.232 23:46:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.232 23:46:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:46.232 23:46:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.232 23:46:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:46.232 [2024-07-15 23:46:21.355929] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:46.490 23:46:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.490 23:46:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=3822306 00:19:46.490 23:46:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:19:46.490 23:46:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:46.490 EAL: No free 2048 kB hugepages reported on node 1 00:19:48.388 23:46:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:19:48.388 23:46:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.388 23:46:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:48.388 23:46:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.388 23:46:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:19:48.388 "tick_rate": 2700000000, 00:19:48.388 "poll_groups": [ 00:19:48.388 { 00:19:48.388 "name": "nvmf_tgt_poll_group_000", 00:19:48.388 "admin_qpairs": 1, 00:19:48.388 "io_qpairs": 1, 00:19:48.388 "current_admin_qpairs": 1, 00:19:48.388 "current_io_qpairs": 1, 00:19:48.388 "pending_bdev_io": 0, 00:19:48.388 "completed_nvme_io": 25475, 00:19:48.388 "transports": [ 00:19:48.388 { 00:19:48.388 "trtype": "TCP" 00:19:48.388 } 00:19:48.388 ] 00:19:48.388 }, 00:19:48.388 { 00:19:48.388 "name": "nvmf_tgt_poll_group_001", 00:19:48.388 "admin_qpairs": 0, 00:19:48.388 "io_qpairs": 3, 00:19:48.388 "current_admin_qpairs": 0, 00:19:48.388 "current_io_qpairs": 3, 00:19:48.388 "pending_bdev_io": 0, 00:19:48.388 "completed_nvme_io": 26949, 00:19:48.388 "transports": [ 00:19:48.388 { 00:19:48.388 "trtype": "TCP" 00:19:48.388 } 00:19:48.388 ] 00:19:48.388 }, 00:19:48.388 { 00:19:48.388 "name": "nvmf_tgt_poll_group_002", 00:19:48.388 "admin_qpairs": 0, 00:19:48.388 "io_qpairs": 0, 00:19:48.388 "current_admin_qpairs": 0, 00:19:48.388 "current_io_qpairs": 0, 00:19:48.388 "pending_bdev_io": 0, 00:19:48.388 "completed_nvme_io": 0, 
00:19:48.388 "transports": [ 00:19:48.388 { 00:19:48.388 "trtype": "TCP" 00:19:48.388 } 00:19:48.389 ] 00:19:48.389 }, 00:19:48.389 { 00:19:48.389 "name": "nvmf_tgt_poll_group_003", 00:19:48.389 "admin_qpairs": 0, 00:19:48.389 "io_qpairs": 0, 00:19:48.389 "current_admin_qpairs": 0, 00:19:48.389 "current_io_qpairs": 0, 00:19:48.389 "pending_bdev_io": 0, 00:19:48.389 "completed_nvme_io": 0, 00:19:48.389 "transports": [ 00:19:48.389 { 00:19:48.389 "trtype": "TCP" 00:19:48.389 } 00:19:48.389 ] 00:19:48.389 } 00:19:48.389 ] 00:19:48.389 }' 00:19:48.389 23:46:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:19:48.389 23:46:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:19:48.389 23:46:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:19:48.389 23:46:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:19:48.389 23:46:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 3822306 00:19:56.504 Initializing NVMe Controllers 00:19:56.504 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:56.504 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:56.504 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:56.504 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:56.504 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:56.504 Initialization complete. Launching workers. 00:19:56.504 ======================================================== 00:19:56.504 Latency(us) 00:19:56.504 Device Information : IOPS MiB/s Average min max 00:19:56.504 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4943.40 19.31 12949.45 2029.68 62519.28 00:19:56.504 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4247.40 16.59 15099.67 1857.44 63352.36 00:19:56.504 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13610.80 53.17 4701.98 1694.44 6769.03 00:19:56.504 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4878.00 19.05 13123.43 1771.51 60095.55 00:19:56.504 ======================================================== 00:19:56.504 Total : 27679.60 108.12 9254.56 1694.44 63352.36 00:19:56.504 00:19:56.504 23:46:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:19:56.504 23:46:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:56.504 23:46:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:56.504 23:46:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:56.504 23:46:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:56.504 23:46:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:56.504 23:46:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:56.504 rmmod nvme_tcp 00:19:56.504 rmmod nvme_fabrics 00:19:56.504 rmmod nvme_keyring 00:19:56.504 23:46:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:56.504 23:46:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:56.504 23:46:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:56.505 23:46:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3822159 ']' 00:19:56.505 23:46:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 3822159 00:19:56.505 23:46:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3822159 ']' 00:19:56.505 23:46:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3822159 00:19:56.505 23:46:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:19:56.505 23:46:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:56.505 23:46:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3822159 00:19:56.505 23:46:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:56.505 23:46:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:56.505 23:46:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3822159' 00:19:56.505 killing process with pid 3822159 00:19:56.505 23:46:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3822159 00:19:56.505 23:46:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3822159 00:19:56.764 23:46:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:56.764 23:46:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:56.764 23:46:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:56.764 23:46:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:56.764 23:46:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:56.764 23:46:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.764 23:46:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:56.764 23:46:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.054 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:00.054 23:46:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:20:00.054 00:20:00.054 real 0m44.917s 00:20:00.054 user 2m38.633s 00:20:00.054 sys 0m9.790s 00:20:00.054 23:46:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:00.054 23:46:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.054 ************************************ 00:20:00.054 END TEST nvmf_perf_adq 00:20:00.054 ************************************ 00:20:00.054 23:46:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:00.054 23:46:34 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:00.054 23:46:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:00.054 23:46:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:00.054 23:46:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:00.054 ************************************ 00:20:00.054 START TEST nvmf_shutdown 00:20:00.054 ************************************ 00:20:00.054 23:46:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:00.054 * Looking for test storage... 
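Both the perf_adq run above and the shutdown test starting here rebuild the same two-port topology: one E810 port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace as the target side, while its sibling port (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator. Condensed from the ip/iptables commands visible in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays in root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                               # initiator -> target sanity check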
00:20:00.054 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:00.055 ************************************ 00:20:00.055 START TEST nvmf_shutdown_tc1 00:20:00.055 ************************************ 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:20:00.055 23:46:35 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:00.055 23:46:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:02.589 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:02.589 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:02.589 23:46:37 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:02.589 Found net devices under 0000:09:00.0: cvl_0_0 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:02.589 Found net devices under 0000:09:00.1: cvl_0_1 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:02.589 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:02.590 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:02.590 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:20:02.590 00:20:02.590 --- 10.0.0.2 ping statistics --- 00:20:02.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.590 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:02.590 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:02.590 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:20:02.590 00:20:02.590 --- 10.0.0.1 ping statistics --- 00:20:02.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.590 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3825590 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3825590 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3825590 ']' 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:02.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:02.590 [2024-07-15 23:46:37.339828] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
00:20:02.590 [2024-07-15 23:46:37.339916] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:02.590 EAL: No free 2048 kB hugepages reported on node 1 00:20:02.590 [2024-07-15 23:46:37.407892] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:02.590 [2024-07-15 23:46:37.515695] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:02.590 [2024-07-15 23:46:37.515765] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:02.590 [2024-07-15 23:46:37.515793] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:02.590 [2024-07-15 23:46:37.515804] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:02.590 [2024-07-15 23:46:37.515814] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:02.590 [2024-07-15 23:46:37.516981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:02.590 [2024-07-15 23:46:37.517041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:02.590 [2024-07-15 23:46:37.517106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:02.590 [2024-07-15 23:46:37.517109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:02.590 [2024-07-15 23:46:37.674839] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:02.590 23:46:37 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:02.590 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:02.848 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:02.849 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:02.849 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:02.849 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:02.849 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:02.849 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.849 23:46:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:02.849 Malloc1 00:20:02.849 [2024-07-15 23:46:37.763140] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:02.849 Malloc2 00:20:02.849 Malloc3 00:20:02.849 Malloc4 00:20:02.849 Malloc5 00:20:03.106 Malloc6 00:20:03.106 Malloc7 00:20:03.106 Malloc8 00:20:03.106 Malloc9 00:20:03.106 Malloc10 00:20:03.106 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.106 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:03.107 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:03.107 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:03.107 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3825671 00:20:03.107 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3825671 
/var/tmp/bdevperf.sock 00:20:03.107 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3825671 ']' 00:20:03.107 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:03.107 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:03.107 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:03.107 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:03.107 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:20:03.107 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:03.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:03.107 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:20:03.107 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:03.107 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.107 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:03.107 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.107 { 00:20:03.107 "params": { 00:20:03.107 "name": "Nvme$subsystem", 00:20:03.107 "trtype": "$TEST_TRANSPORT", 00:20:03.107 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.107 "adrfam": "ipv4", 00:20:03.107 "trsvcid": "$NVMF_PORT", 00:20:03.107 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.107 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.107 "hdgst": ${hdgst:-false}, 00:20:03.107 "ddgst": ${ddgst:-false} 00:20:03.107 }, 00:20:03.107 "method": "bdev_nvme_attach_controller" 00:20:03.107 } 00:20:03.107 EOF 00:20:03.107 )") 00:20:03.107 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:03.107 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.107 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.107 { 00:20:03.107 "params": { 00:20:03.107 "name": "Nvme$subsystem", 00:20:03.107 "trtype": "$TEST_TRANSPORT", 00:20:03.107 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.107 "adrfam": "ipv4", 00:20:03.107 "trsvcid": "$NVMF_PORT", 00:20:03.107 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.107 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.107 "hdgst": ${hdgst:-false}, 00:20:03.107 "ddgst": ${ddgst:-false} 00:20:03.107 }, 00:20:03.107 "method": "bdev_nvme_attach_controller" 00:20:03.107 } 00:20:03.107 EOF 00:20:03.107 )") 00:20:03.107 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:03.107 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.107 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.107 { 00:20:03.107 "params": { 00:20:03.107 
"name": "Nvme$subsystem", 00:20:03.107 "trtype": "$TEST_TRANSPORT", 00:20:03.107 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.107 "adrfam": "ipv4", 00:20:03.107 "trsvcid": "$NVMF_PORT", 00:20:03.107 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.107 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.107 "hdgst": ${hdgst:-false}, 00:20:03.107 "ddgst": ${ddgst:-false} 00:20:03.107 }, 00:20:03.107 "method": "bdev_nvme_attach_controller" 00:20:03.107 } 00:20:03.107 EOF 00:20:03.107 )") 00:20:03.107 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:03.107 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.107 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.107 { 00:20:03.107 "params": { 00:20:03.107 "name": "Nvme$subsystem", 00:20:03.107 "trtype": "$TEST_TRANSPORT", 00:20:03.107 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.107 "adrfam": "ipv4", 00:20:03.107 "trsvcid": "$NVMF_PORT", 00:20:03.107 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.107 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.107 "hdgst": ${hdgst:-false}, 00:20:03.107 "ddgst": ${ddgst:-false} 00:20:03.107 }, 00:20:03.107 "method": "bdev_nvme_attach_controller" 00:20:03.107 } 00:20:03.107 EOF 00:20:03.107 )") 00:20:03.107 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:03.107 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.107 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.107 { 00:20:03.107 "params": { 00:20:03.107 "name": "Nvme$subsystem", 00:20:03.107 "trtype": "$TEST_TRANSPORT", 00:20:03.107 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.107 "adrfam": "ipv4", 00:20:03.107 "trsvcid": "$NVMF_PORT", 00:20:03.107 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.107 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.107 "hdgst": ${hdgst:-false}, 00:20:03.107 "ddgst": ${ddgst:-false} 00:20:03.107 }, 00:20:03.107 "method": "bdev_nvme_attach_controller" 00:20:03.107 } 00:20:03.107 EOF 00:20:03.107 )") 00:20:03.107 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:03.366 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.366 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.366 { 00:20:03.366 "params": { 00:20:03.366 "name": "Nvme$subsystem", 00:20:03.366 "trtype": "$TEST_TRANSPORT", 00:20:03.366 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.366 "adrfam": "ipv4", 00:20:03.366 "trsvcid": "$NVMF_PORT", 00:20:03.366 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.366 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.366 "hdgst": ${hdgst:-false}, 00:20:03.366 "ddgst": ${ddgst:-false} 00:20:03.366 }, 00:20:03.366 "method": "bdev_nvme_attach_controller" 00:20:03.366 } 00:20:03.366 EOF 00:20:03.366 )") 00:20:03.366 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:03.366 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.366 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.366 { 00:20:03.366 "params": { 00:20:03.366 "name": "Nvme$subsystem", 
00:20:03.366 "trtype": "$TEST_TRANSPORT", 00:20:03.366 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.366 "adrfam": "ipv4", 00:20:03.366 "trsvcid": "$NVMF_PORT", 00:20:03.366 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.366 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.366 "hdgst": ${hdgst:-false}, 00:20:03.366 "ddgst": ${ddgst:-false} 00:20:03.366 }, 00:20:03.366 "method": "bdev_nvme_attach_controller" 00:20:03.366 } 00:20:03.366 EOF 00:20:03.366 )") 00:20:03.366 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:03.366 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.366 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.366 { 00:20:03.366 "params": { 00:20:03.366 "name": "Nvme$subsystem", 00:20:03.366 "trtype": "$TEST_TRANSPORT", 00:20:03.366 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.366 "adrfam": "ipv4", 00:20:03.366 "trsvcid": "$NVMF_PORT", 00:20:03.366 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.366 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.366 "hdgst": ${hdgst:-false}, 00:20:03.366 "ddgst": ${ddgst:-false} 00:20:03.366 }, 00:20:03.366 "method": "bdev_nvme_attach_controller" 00:20:03.366 } 00:20:03.366 EOF 00:20:03.366 )") 00:20:03.366 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:03.366 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.366 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.366 { 00:20:03.366 "params": { 00:20:03.366 "name": "Nvme$subsystem", 00:20:03.366 "trtype": "$TEST_TRANSPORT", 00:20:03.366 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.366 "adrfam": "ipv4", 00:20:03.366 "trsvcid": "$NVMF_PORT", 00:20:03.366 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.366 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.366 "hdgst": ${hdgst:-false}, 00:20:03.366 "ddgst": ${ddgst:-false} 00:20:03.366 }, 00:20:03.366 "method": "bdev_nvme_attach_controller" 00:20:03.366 } 00:20:03.366 EOF 00:20:03.366 )") 00:20:03.366 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:03.366 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.366 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.366 { 00:20:03.366 "params": { 00:20:03.366 "name": "Nvme$subsystem", 00:20:03.366 "trtype": "$TEST_TRANSPORT", 00:20:03.366 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.366 "adrfam": "ipv4", 00:20:03.366 "trsvcid": "$NVMF_PORT", 00:20:03.366 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.366 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.366 "hdgst": ${hdgst:-false}, 00:20:03.366 "ddgst": ${ddgst:-false} 00:20:03.366 }, 00:20:03.366 "method": "bdev_nvme_attach_controller" 00:20:03.366 } 00:20:03.366 EOF 00:20:03.366 )") 00:20:03.366 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:03.366 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
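All ten per-subsystem heredoc fragments are queued at this point; the next traced steps join them on IFS=, and pipe the result through jq. A rough bash sketch of the gen_nvmf_target_json pattern, with the transport, address, and NQN fields hardcoded to the values this run substitutes (the real helper in nvmf/common.sh takes them from the environment and wraps the array in a fuller bdev-subsystem document):

gen_nvmf_target_json_sketch() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    local IFS=,                                  # "${config[*]}" joins on the first IFS char
    printf '[%s]\n' "${config[*]}" | jq .        # comma-join the fragments, let jq validate
}
# usage: gen_nvmf_target_json_sketch 1 2 3 4 5 6 7 8 9 10 > /tmp/bdevperf.json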
00:20:03.366 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:20:03.366 23:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:03.366 "params": { 00:20:03.366 "name": "Nvme1", 00:20:03.366 "trtype": "tcp", 00:20:03.366 "traddr": "10.0.0.2", 00:20:03.366 "adrfam": "ipv4", 00:20:03.366 "trsvcid": "4420", 00:20:03.366 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.366 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:03.366 "hdgst": false, 00:20:03.366 "ddgst": false 00:20:03.366 }, 00:20:03.366 "method": "bdev_nvme_attach_controller" 00:20:03.366 },{ 00:20:03.366 "params": { 00:20:03.366 "name": "Nvme2", 00:20:03.366 "trtype": "tcp", 00:20:03.366 "traddr": "10.0.0.2", 00:20:03.366 "adrfam": "ipv4", 00:20:03.366 "trsvcid": "4420", 00:20:03.366 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:03.366 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:03.366 "hdgst": false, 00:20:03.366 "ddgst": false 00:20:03.366 }, 00:20:03.366 "method": "bdev_nvme_attach_controller" 00:20:03.366 },{ 00:20:03.366 "params": { 00:20:03.366 "name": "Nvme3", 00:20:03.366 "trtype": "tcp", 00:20:03.366 "traddr": "10.0.0.2", 00:20:03.366 "adrfam": "ipv4", 00:20:03.366 "trsvcid": "4420", 00:20:03.366 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:03.366 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:03.366 "hdgst": false, 00:20:03.366 "ddgst": false 00:20:03.366 }, 00:20:03.366 "method": "bdev_nvme_attach_controller" 00:20:03.366 },{ 00:20:03.366 "params": { 00:20:03.366 "name": "Nvme4", 00:20:03.366 "trtype": "tcp", 00:20:03.366 "traddr": "10.0.0.2", 00:20:03.366 "adrfam": "ipv4", 00:20:03.366 "trsvcid": "4420", 00:20:03.366 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:03.366 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:03.366 "hdgst": false, 00:20:03.366 "ddgst": false 00:20:03.366 }, 00:20:03.366 "method": "bdev_nvme_attach_controller" 00:20:03.366 },{ 00:20:03.366 "params": { 00:20:03.366 "name": "Nvme5", 00:20:03.366 "trtype": "tcp", 00:20:03.366 "traddr": "10.0.0.2", 00:20:03.366 "adrfam": "ipv4", 00:20:03.366 "trsvcid": "4420", 00:20:03.366 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:03.366 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:03.366 "hdgst": false, 00:20:03.366 "ddgst": false 00:20:03.366 }, 00:20:03.366 "method": "bdev_nvme_attach_controller" 00:20:03.366 },{ 00:20:03.366 "params": { 00:20:03.366 "name": "Nvme6", 00:20:03.366 "trtype": "tcp", 00:20:03.366 "traddr": "10.0.0.2", 00:20:03.366 "adrfam": "ipv4", 00:20:03.366 "trsvcid": "4420", 00:20:03.366 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:03.366 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:03.366 "hdgst": false, 00:20:03.366 "ddgst": false 00:20:03.366 }, 00:20:03.366 "method": "bdev_nvme_attach_controller" 00:20:03.366 },{ 00:20:03.366 "params": { 00:20:03.366 "name": "Nvme7", 00:20:03.366 "trtype": "tcp", 00:20:03.366 "traddr": "10.0.0.2", 00:20:03.366 "adrfam": "ipv4", 00:20:03.366 "trsvcid": "4420", 00:20:03.366 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:03.366 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:03.366 "hdgst": false, 00:20:03.366 "ddgst": false 00:20:03.366 }, 00:20:03.366 "method": "bdev_nvme_attach_controller" 00:20:03.366 },{ 00:20:03.366 "params": { 00:20:03.366 "name": "Nvme8", 00:20:03.366 "trtype": "tcp", 00:20:03.366 "traddr": "10.0.0.2", 00:20:03.366 "adrfam": "ipv4", 00:20:03.366 "trsvcid": "4420", 00:20:03.366 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:03.366 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:03.366 "hdgst": false, 
00:20:03.366 "ddgst": false 00:20:03.366 }, 00:20:03.366 "method": "bdev_nvme_attach_controller" 00:20:03.366 },{ 00:20:03.366 "params": { 00:20:03.366 "name": "Nvme9", 00:20:03.366 "trtype": "tcp", 00:20:03.366 "traddr": "10.0.0.2", 00:20:03.366 "adrfam": "ipv4", 00:20:03.366 "trsvcid": "4420", 00:20:03.366 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:03.366 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:03.366 "hdgst": false, 00:20:03.366 "ddgst": false 00:20:03.366 }, 00:20:03.366 "method": "bdev_nvme_attach_controller" 00:20:03.366 },{ 00:20:03.366 "params": { 00:20:03.366 "name": "Nvme10", 00:20:03.366 "trtype": "tcp", 00:20:03.367 "traddr": "10.0.0.2", 00:20:03.367 "adrfam": "ipv4", 00:20:03.367 "trsvcid": "4420", 00:20:03.367 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:03.367 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:03.367 "hdgst": false, 00:20:03.367 "ddgst": false 00:20:03.367 }, 00:20:03.367 "method": "bdev_nvme_attach_controller" 00:20:03.367 }' 00:20:03.367 [2024-07-15 23:46:38.256697] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:20:03.367 [2024-07-15 23:46:38.256772] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:03.367 EAL: No free 2048 kB hugepages reported on node 1 00:20:03.367 [2024-07-15 23:46:38.322380] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.367 [2024-07-15 23:46:38.432448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.264 23:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:05.264 23:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:20:05.264 23:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:05.264 23:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.264 23:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:05.264 23:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.264 23:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 3825671 00:20:05.264 23:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:20:05.264 23:46:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:20:06.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3825671 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:06.197 23:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 3825590 00:20:06.197 23:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:06.197 23:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:06.197 23:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:20:06.197 23:46:41 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:20:06.197 23:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:06.197 23:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:06.197 { 00:20:06.197 "params": { 00:20:06.197 "name": "Nvme$subsystem", 00:20:06.197 "trtype": "$TEST_TRANSPORT", 00:20:06.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:06.197 "adrfam": "ipv4", 00:20:06.197 "trsvcid": "$NVMF_PORT", 00:20:06.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:06.197 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:06.197 "hdgst": ${hdgst:-false}, 00:20:06.197 "ddgst": ${ddgst:-false} 00:20:06.197 }, 00:20:06.197 "method": "bdev_nvme_attach_controller" 00:20:06.197 } 00:20:06.197 EOF 00:20:06.197 )") 00:20:06.197 23:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:06.197 23:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:06.197 23:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:06.197 { 00:20:06.197 "params": { 00:20:06.197 "name": "Nvme$subsystem", 00:20:06.197 "trtype": "$TEST_TRANSPORT", 00:20:06.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:06.197 "adrfam": "ipv4", 00:20:06.197 "trsvcid": "$NVMF_PORT", 00:20:06.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:06.197 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:06.197 "hdgst": ${hdgst:-false}, 00:20:06.197 "ddgst": ${ddgst:-false} 00:20:06.197 }, 00:20:06.197 "method": "bdev_nvme_attach_controller" 00:20:06.197 } 00:20:06.197 EOF 00:20:06.197 )") 00:20:06.197 23:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:06.197 23:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:06.197 23:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:06.197 { 00:20:06.197 "params": { 00:20:06.197 "name": "Nvme$subsystem", 00:20:06.197 "trtype": "$TEST_TRANSPORT", 00:20:06.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:06.198 "adrfam": "ipv4", 00:20:06.198 "trsvcid": "$NVMF_PORT", 00:20:06.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:06.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:06.198 "hdgst": ${hdgst:-false}, 00:20:06.198 "ddgst": ${ddgst:-false} 00:20:06.198 }, 00:20:06.198 "method": "bdev_nvme_attach_controller" 00:20:06.198 } 00:20:06.198 EOF 00:20:06.198 )") 00:20:06.198 23:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:06.198 23:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:06.198 23:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:06.198 { 00:20:06.198 "params": { 00:20:06.198 "name": "Nvme$subsystem", 00:20:06.198 "trtype": "$TEST_TRANSPORT", 00:20:06.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:06.198 "adrfam": "ipv4", 00:20:06.198 "trsvcid": "$NVMF_PORT", 00:20:06.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:06.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:06.198 "hdgst": ${hdgst:-false}, 00:20:06.198 "ddgst": ${ddgst:-false} 00:20:06.198 }, 00:20:06.198 "method": "bdev_nvme_attach_controller" 00:20:06.198 } 00:20:06.198 EOF 00:20:06.198 )") 00:20:06.198 23:46:41 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:06.198 23:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:06.198 23:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:06.198 { 00:20:06.198 "params": { 00:20:06.198 "name": "Nvme$subsystem", 00:20:06.198 "trtype": "$TEST_TRANSPORT", 00:20:06.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:06.198 "adrfam": "ipv4", 00:20:06.198 "trsvcid": "$NVMF_PORT", 00:20:06.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:06.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:06.198 "hdgst": ${hdgst:-false}, 00:20:06.198 "ddgst": ${ddgst:-false} 00:20:06.198 }, 00:20:06.198 "method": "bdev_nvme_attach_controller" 00:20:06.198 } 00:20:06.198 EOF 00:20:06.198 )") 00:20:06.198 23:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:06.198 23:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:06.198 23:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:06.198 { 00:20:06.198 "params": { 00:20:06.198 "name": "Nvme$subsystem", 00:20:06.198 "trtype": "$TEST_TRANSPORT", 00:20:06.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:06.198 "adrfam": "ipv4", 00:20:06.198 "trsvcid": "$NVMF_PORT", 00:20:06.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:06.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:06.198 "hdgst": ${hdgst:-false}, 00:20:06.198 "ddgst": ${ddgst:-false} 00:20:06.198 }, 00:20:06.198 "method": "bdev_nvme_attach_controller" 00:20:06.198 } 00:20:06.198 EOF 00:20:06.198 )") 00:20:06.198 23:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:06.198 23:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:06.198 23:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:06.198 { 00:20:06.198 "params": { 00:20:06.198 "name": "Nvme$subsystem", 00:20:06.198 "trtype": "$TEST_TRANSPORT", 00:20:06.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:06.198 "adrfam": "ipv4", 00:20:06.198 "trsvcid": "$NVMF_PORT", 00:20:06.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:06.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:06.198 "hdgst": ${hdgst:-false}, 00:20:06.198 "ddgst": ${ddgst:-false} 00:20:06.198 }, 00:20:06.198 "method": "bdev_nvme_attach_controller" 00:20:06.198 } 00:20:06.198 EOF 00:20:06.198 )") 00:20:06.198 23:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:06.198 23:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:06.198 23:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:06.198 { 00:20:06.198 "params": { 00:20:06.198 "name": "Nvme$subsystem", 00:20:06.198 "trtype": "$TEST_TRANSPORT", 00:20:06.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:06.198 "adrfam": "ipv4", 00:20:06.198 "trsvcid": "$NVMF_PORT", 00:20:06.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:06.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:06.198 "hdgst": ${hdgst:-false}, 00:20:06.198 "ddgst": ${ddgst:-false} 00:20:06.198 }, 00:20:06.198 "method": "bdev_nvme_attach_controller" 00:20:06.198 } 00:20:06.198 EOF 00:20:06.198 )") 00:20:06.198 23:46:41 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:06.198 23:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:06.198 23:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:06.198 { 00:20:06.198 "params": { 00:20:06.198 "name": "Nvme$subsystem", 00:20:06.198 "trtype": "$TEST_TRANSPORT", 00:20:06.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:06.198 "adrfam": "ipv4", 00:20:06.198 "trsvcid": "$NVMF_PORT", 00:20:06.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:06.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:06.198 "hdgst": ${hdgst:-false}, 00:20:06.198 "ddgst": ${ddgst:-false} 00:20:06.198 }, 00:20:06.198 "method": "bdev_nvme_attach_controller" 00:20:06.198 } 00:20:06.198 EOF 00:20:06.198 )") 00:20:06.198 23:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:06.198 23:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:06.198 23:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:06.198 { 00:20:06.198 "params": { 00:20:06.198 "name": "Nvme$subsystem", 00:20:06.198 "trtype": "$TEST_TRANSPORT", 00:20:06.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:06.198 "adrfam": "ipv4", 00:20:06.198 "trsvcid": "$NVMF_PORT", 00:20:06.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:06.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:06.198 "hdgst": ${hdgst:-false}, 00:20:06.198 "ddgst": ${ddgst:-false} 00:20:06.198 }, 00:20:06.198 "method": "bdev_nvme_attach_controller" 00:20:06.198 } 00:20:06.198 EOF 00:20:06.198 )") 00:20:06.198 23:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:06.198 23:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:20:06.198 23:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:20:06.198 23:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:06.198 "params": { 00:20:06.198 "name": "Nvme1", 00:20:06.198 "trtype": "tcp", 00:20:06.198 "traddr": "10.0.0.2", 00:20:06.198 "adrfam": "ipv4", 00:20:06.198 "trsvcid": "4420", 00:20:06.198 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:06.198 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:06.198 "hdgst": false, 00:20:06.198 "ddgst": false 00:20:06.198 }, 00:20:06.198 "method": "bdev_nvme_attach_controller" 00:20:06.198 },{ 00:20:06.198 "params": { 00:20:06.198 "name": "Nvme2", 00:20:06.198 "trtype": "tcp", 00:20:06.198 "traddr": "10.0.0.2", 00:20:06.198 "adrfam": "ipv4", 00:20:06.198 "trsvcid": "4420", 00:20:06.198 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:06.198 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:06.198 "hdgst": false, 00:20:06.198 "ddgst": false 00:20:06.198 }, 00:20:06.198 "method": "bdev_nvme_attach_controller" 00:20:06.198 },{ 00:20:06.198 "params": { 00:20:06.198 "name": "Nvme3", 00:20:06.198 "trtype": "tcp", 00:20:06.198 "traddr": "10.0.0.2", 00:20:06.198 "adrfam": "ipv4", 00:20:06.198 "trsvcid": "4420", 00:20:06.198 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:06.198 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:06.198 "hdgst": false, 00:20:06.198 "ddgst": false 00:20:06.198 }, 00:20:06.198 "method": "bdev_nvme_attach_controller" 00:20:06.198 },{ 00:20:06.198 "params": { 00:20:06.198 "name": "Nvme4", 00:20:06.198 "trtype": "tcp", 00:20:06.198 "traddr": "10.0.0.2", 00:20:06.198 "adrfam": "ipv4", 00:20:06.198 "trsvcid": "4420", 00:20:06.198 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:06.198 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:06.198 "hdgst": false, 00:20:06.198 "ddgst": false 00:20:06.198 }, 00:20:06.198 "method": "bdev_nvme_attach_controller" 00:20:06.198 },{ 00:20:06.198 "params": { 00:20:06.198 "name": "Nvme5", 00:20:06.198 "trtype": "tcp", 00:20:06.198 "traddr": "10.0.0.2", 00:20:06.198 "adrfam": "ipv4", 00:20:06.198 "trsvcid": "4420", 00:20:06.198 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:06.198 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:06.198 "hdgst": false, 00:20:06.198 "ddgst": false 00:20:06.198 }, 00:20:06.198 "method": "bdev_nvme_attach_controller" 00:20:06.198 },{ 00:20:06.198 "params": { 00:20:06.198 "name": "Nvme6", 00:20:06.198 "trtype": "tcp", 00:20:06.198 "traddr": "10.0.0.2", 00:20:06.198 "adrfam": "ipv4", 00:20:06.198 "trsvcid": "4420", 00:20:06.198 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:06.198 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:06.198 "hdgst": false, 00:20:06.198 "ddgst": false 00:20:06.198 }, 00:20:06.198 "method": "bdev_nvme_attach_controller" 00:20:06.198 },{ 00:20:06.198 "params": { 00:20:06.198 "name": "Nvme7", 00:20:06.198 "trtype": "tcp", 00:20:06.198 "traddr": "10.0.0.2", 00:20:06.198 "adrfam": "ipv4", 00:20:06.198 "trsvcid": "4420", 00:20:06.198 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:06.198 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:06.198 "hdgst": false, 00:20:06.198 "ddgst": false 00:20:06.198 }, 00:20:06.198 "method": "bdev_nvme_attach_controller" 00:20:06.198 },{ 00:20:06.198 "params": { 00:20:06.198 "name": "Nvme8", 00:20:06.198 "trtype": "tcp", 00:20:06.199 "traddr": "10.0.0.2", 00:20:06.199 "adrfam": "ipv4", 00:20:06.199 "trsvcid": "4420", 00:20:06.199 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:06.199 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:06.199 "hdgst": false, 
00:20:06.199 "ddgst": false 00:20:06.199 }, 00:20:06.199 "method": "bdev_nvme_attach_controller" 00:20:06.199 },{ 00:20:06.199 "params": { 00:20:06.199 "name": "Nvme9", 00:20:06.199 "trtype": "tcp", 00:20:06.199 "traddr": "10.0.0.2", 00:20:06.199 "adrfam": "ipv4", 00:20:06.199 "trsvcid": "4420", 00:20:06.199 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:06.199 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:06.199 "hdgst": false, 00:20:06.199 "ddgst": false 00:20:06.199 }, 00:20:06.199 "method": "bdev_nvme_attach_controller" 00:20:06.199 },{ 00:20:06.199 "params": { 00:20:06.199 "name": "Nvme10", 00:20:06.199 "trtype": "tcp", 00:20:06.199 "traddr": "10.0.0.2", 00:20:06.199 "adrfam": "ipv4", 00:20:06.199 "trsvcid": "4420", 00:20:06.199 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:06.199 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:06.199 "hdgst": false, 00:20:06.199 "ddgst": false 00:20:06.199 }, 00:20:06.199 "method": "bdev_nvme_attach_controller" 00:20:06.199 }' 00:20:06.199 [2024-07-15 23:46:41.282841] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:20:06.199 [2024-07-15 23:46:41.282928] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3826074 ] 00:20:06.199 EAL: No free 2048 kB hugepages reported on node 1 00:20:06.457 [2024-07-15 23:46:41.348748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.457 [2024-07-15 23:46:41.459726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.830 Running I/O for 1 seconds... 00:20:09.201 00:20:09.201 Latency(us) 00:20:09.201 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.201 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:09.201 Verification LBA range: start 0x0 length 0x400 00:20:09.201 Nvme1n1 : 1.13 231.11 14.44 0.00 0.00 269586.57 8883.77 256318.58 00:20:09.201 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:09.201 Verification LBA range: start 0x0 length 0x400 00:20:09.201 Nvme2n1 : 1.10 235.65 14.73 0.00 0.00 262001.73 8398.32 234570.33 00:20:09.201 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:09.201 Verification LBA range: start 0x0 length 0x400 00:20:09.201 Nvme3n1 : 1.09 234.88 14.68 0.00 0.00 260475.45 21068.61 257872.02 00:20:09.201 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:09.201 Verification LBA range: start 0x0 length 0x400 00:20:09.201 Nvme4n1 : 1.10 237.20 14.83 0.00 0.00 249412.28 10291.58 253211.69 00:20:09.201 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:09.201 Verification LBA range: start 0x0 length 0x400 00:20:09.201 Nvme5n1 : 1.14 224.39 14.02 0.00 0.00 264247.18 21359.88 260978.92 00:20:09.201 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:09.201 Verification LBA range: start 0x0 length 0x400 00:20:09.201 Nvme6n1 : 1.14 223.74 13.98 0.00 0.00 260584.30 23592.96 256318.58 00:20:09.201 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:09.201 Verification LBA range: start 0x0 length 0x400 00:20:09.201 Nvme7n1 : 1.18 270.70 16.92 0.00 0.00 211817.36 13398.47 278066.82 00:20:09.201 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:09.201 Verification LBA range: start 
0x0 length 0x400 00:20:09.201 Nvme8n1 : 1.15 223.23 13.95 0.00 0.00 252252.54 18350.08 257872.02 00:20:09.201 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:09.201 Verification LBA range: start 0x0 length 0x400 00:20:09.201 Nvme9n1 : 1.15 221.91 13.87 0.00 0.00 249474.65 18252.99 265639.25 00:20:09.201 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:09.201 Verification LBA range: start 0x0 length 0x400 00:20:09.201 Nvme10n1 : 1.20 267.68 16.73 0.00 0.00 204371.25 5631.24 290494.39 00:20:09.201 =================================================================================================================== 00:20:09.201 Total : 2370.49 148.16 0.00 0.00 246571.95 5631.24 290494.39 00:20:09.458 23:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:20:09.458 23:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:09.458 23:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:09.458 23:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:09.458 23:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:09.458 23:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:09.458 23:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:20:09.458 23:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:09.458 23:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:20:09.458 23:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:09.458 23:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:09.458 rmmod nvme_tcp 00:20:09.458 rmmod nvme_fabrics 00:20:09.458 rmmod nvme_keyring 00:20:09.458 23:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:09.458 23:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:20:09.458 23:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:20:09.458 23:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3825590 ']' 00:20:09.458 23:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3825590 00:20:09.458 23:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 3825590 ']' 00:20:09.458 23:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 3825590 00:20:09.458 23:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:20:09.458 23:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:09.458 23:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3825590 00:20:09.458 23:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:09.458 23:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 
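The killprocess helper traced around this point inspects the victim before signalling it: kill -0 confirms pid 3825590 is still alive, and on Linux ps resolves the command name (reactor_1 here) so a bare sudo wrapper is never killed outright. A sketch of that shape, assuming only the checks visible in the trace (the real autotest_common.sh helper carries extra branches this omits):

killprocess_sketch() {
    local pid=$1 process_name=
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 0            # already gone, nothing to do
    [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" = sudo ] && return 1            # refuse to kill sudo itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                   # reap it if it is our child
}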
00:20:09.458 23:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3825590' 00:20:09.458 killing process with pid 3825590 00:20:09.458 23:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 3825590 00:20:09.458 23:46:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 3825590 00:20:10.023 23:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:10.023 23:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:10.023 23:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:10.023 23:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:10.023 23:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:10.023 23:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.023 23:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:10.023 23:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:11.930 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:12.187 00:20:12.187 real 0m11.979s 00:20:12.187 user 0m34.635s 00:20:12.187 sys 0m3.267s 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:12.187 ************************************ 00:20:12.187 END TEST nvmf_shutdown_tc1 00:20:12.187 ************************************ 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:12.187 ************************************ 00:20:12.187 START TEST nvmf_shutdown_tc2 00:20:12.187 ************************************ 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:12.187 23:46:47 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:12.187 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:12.187 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:12.188 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:12.188 Found net devices under 0000:09:00.0: cvl_0_0 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:12.188 Found net devices under 0000:09:00.1: cvl_0_1 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:12.188 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:12.188 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:20:12.188 00:20:12.188 --- 10.0.0.2 ping statistics --- 00:20:12.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.188 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:12.188 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:12.188 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:20:12.188 00:20:12.188 --- 10.0.0.1 ping statistics --- 00:20:12.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.188 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@481 -- # nvmfpid=3826840 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3826840 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3826840 ']' 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:12.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:12.188 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:12.445 [2024-07-15 23:46:47.343090] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:20:12.445 [2024-07-15 23:46:47.343172] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:12.445 EAL: No free 2048 kB hugepages reported on node 1 00:20:12.445 [2024-07-15 23:46:47.409413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:12.445 [2024-07-15 23:46:47.518974] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:12.445 [2024-07-15 23:46:47.519042] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:12.445 [2024-07-15 23:46:47.519055] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:12.445 [2024-07-15 23:46:47.519066] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:12.445 [2024-07-15 23:46:47.519076] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
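The waitforlisten step above gates the rest of the test: it repeatedly checks that the freshly forked nvmf_tgt is still alive and that its RPC socket at /var/tmp/spdk.sock accepts connections before any rpc_cmd runs. A minimal sketch of that polling pattern, as a hypothetical stand-in for the autotest_common.sh helper (it assumes OpenBSD netcat, whose -U flag speaks to UNIX-domain sockets):

#!/usr/bin/env bash
# Poll until $pid is alive AND its RPC UNIX socket accepts a connection.
wait_for_rpc() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100} i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1       # kill -0 probes existence without sending a signal
        nc -zU "$rpc_addr" 2>/dev/null && return 0   # -z: connect and close, -U: UNIX-domain socket
        sleep 0.1
    done
    return 1                                         # retry budget exhausted; caller fails the test
}

Failing fast here matters: if the target died during EAL initialization, every later RPC would otherwise hang against a socket that will never appear.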
00:20:12.445 [2024-07-15 23:46:47.519170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:12.445 [2024-07-15 23:46:47.519234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:12.445 [2024-07-15 23:46:47.519258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:12.445 [2024-07-15 23:46:47.519263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:12.703 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:12.703 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:20:12.703 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:12.703 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:12.703 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:12.703 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:12.703 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:12.703 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.703 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:12.703 [2024-07-15 23:46:47.681816] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:12.703 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.703 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:12.703 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:12.703 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:12.703 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:12.703 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:12.703 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:12.703 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:12.703 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:12.703 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:12.703 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:12.703 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:12.703 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:12.703 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:12.703 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:12.703 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:12.703 23:46:47 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:12.703 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:12.703 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:12.703 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:12.703 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:12.703 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:12.703 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:12.703 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:12.703 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:12.703 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:12.703 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:12.703 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.703 23:46:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:12.703 Malloc1 00:20:12.703 [2024-07-15 23:46:47.771228] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:12.703 Malloc2 00:20:12.960 Malloc3 00:20:12.960 Malloc4 00:20:12.960 Malloc5 00:20:12.960 Malloc6 00:20:12.960 Malloc7 00:20:13.218 Malloc8 00:20:13.218 Malloc9 00:20:13.218 Malloc10 00:20:13.218 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.218 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:13.218 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:13.218 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:13.218 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3827024 00:20:13.218 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3827024 /var/tmp/bdevperf.sock 00:20:13.218 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3827024 ']' 00:20:13.218 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:13.218 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:13.218 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:13.218 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:13.218 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
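Each '# cat' above appends one block of RPC commands to rpcs.txt, and the single rpc_cmd at shutdown.sh@35 replays the whole file in one pass; the Malloc1 through Malloc10 lines are the target acknowledging the bdevs created that way. A condensed sketch of the batching idea (the exact per-subsystem blocks live in shutdown.sh; the sizes, serial numbers, and bare rpc.py path below are illustrative, though rpc.py does accept one command per line on stdin):

# Build one malloc-backed subsystem with a TCP listener per id, then
# replay the whole batch through a single rpc.py invocation.
rm -f rpcs.txt
for i in {1..10}; do
cat >> rpcs.txt <<EOF
bdev_malloc_create -b Malloc$i 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
scripts/rpc.py < rpcs.txt   # one JSON-RPC session instead of forty

Batching keeps setup time flat as the subsystem count grows, since each rpc.py start-up costs far more than an individual RPC.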
00:20:13.218 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:20:13.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:13.218 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:13.218 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:20:13.218 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:13.218 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:13.218 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:13.218 { 00:20:13.218 "params": { 00:20:13.218 "name": "Nvme$subsystem", 00:20:13.218 "trtype": "$TEST_TRANSPORT", 00:20:13.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:13.218 "adrfam": "ipv4", 00:20:13.218 "trsvcid": "$NVMF_PORT", 00:20:13.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:13.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:13.218 "hdgst": ${hdgst:-false}, 00:20:13.218 "ddgst": ${ddgst:-false} 00:20:13.218 }, 00:20:13.218 "method": "bdev_nvme_attach_controller" 00:20:13.218 } 00:20:13.218 EOF 00:20:13.218 )") 00:20:13.218 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:13.218 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:13.218 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:13.218 { 00:20:13.218 "params": { 00:20:13.218 "name": "Nvme$subsystem", 00:20:13.218 "trtype": "$TEST_TRANSPORT", 00:20:13.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:13.218 "adrfam": "ipv4", 00:20:13.218 "trsvcid": "$NVMF_PORT", 00:20:13.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:13.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:13.218 "hdgst": ${hdgst:-false}, 00:20:13.218 "ddgst": ${ddgst:-false} 00:20:13.218 }, 00:20:13.218 "method": "bdev_nvme_attach_controller" 00:20:13.218 } 00:20:13.218 EOF 00:20:13.218 )") 00:20:13.218 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:13.218 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:13.218 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:13.218 { 00:20:13.218 "params": { 00:20:13.218 "name": "Nvme$subsystem", 00:20:13.218 "trtype": "$TEST_TRANSPORT", 00:20:13.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:13.218 "adrfam": "ipv4", 00:20:13.218 "trsvcid": "$NVMF_PORT", 00:20:13.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:13.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:13.218 "hdgst": ${hdgst:-false}, 00:20:13.218 "ddgst": ${ddgst:-false} 00:20:13.218 }, 00:20:13.218 "method": "bdev_nvme_attach_controller" 00:20:13.218 } 00:20:13.218 EOF 00:20:13.218 )") 00:20:13.218 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:13.218 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:13.218 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:13.218 { 00:20:13.218 "params": { 00:20:13.218 "name": "Nvme$subsystem", 00:20:13.218 "trtype": "$TEST_TRANSPORT", 00:20:13.218 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:20:13.218 "adrfam": "ipv4", 00:20:13.218 "trsvcid": "$NVMF_PORT", 00:20:13.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:13.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:13.218 "hdgst": ${hdgst:-false}, 00:20:13.218 "ddgst": ${ddgst:-false} 00:20:13.218 }, 00:20:13.218 "method": "bdev_nvme_attach_controller" 00:20:13.218 } 00:20:13.218 EOF 00:20:13.218 )") 00:20:13.218 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:13.218 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:13.218 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:13.218 { 00:20:13.218 "params": { 00:20:13.218 "name": "Nvme$subsystem", 00:20:13.218 "trtype": "$TEST_TRANSPORT", 00:20:13.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:13.218 "adrfam": "ipv4", 00:20:13.218 "trsvcid": "$NVMF_PORT", 00:20:13.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:13.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:13.218 "hdgst": ${hdgst:-false}, 00:20:13.218 "ddgst": ${ddgst:-false} 00:20:13.218 }, 00:20:13.218 "method": "bdev_nvme_attach_controller" 00:20:13.218 } 00:20:13.218 EOF 00:20:13.218 )") 00:20:13.218 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:13.218 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:13.218 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:13.218 { 00:20:13.218 "params": { 00:20:13.218 "name": "Nvme$subsystem", 00:20:13.218 "trtype": "$TEST_TRANSPORT", 00:20:13.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:13.218 "adrfam": "ipv4", 00:20:13.218 "trsvcid": "$NVMF_PORT", 00:20:13.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:13.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:13.218 "hdgst": ${hdgst:-false}, 00:20:13.218 "ddgst": ${ddgst:-false} 00:20:13.218 }, 00:20:13.218 "method": "bdev_nvme_attach_controller" 00:20:13.218 } 00:20:13.218 EOF 00:20:13.218 )") 00:20:13.218 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:13.218 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:13.218 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:13.219 { 00:20:13.219 "params": { 00:20:13.219 "name": "Nvme$subsystem", 00:20:13.219 "trtype": "$TEST_TRANSPORT", 00:20:13.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:13.219 "adrfam": "ipv4", 00:20:13.219 "trsvcid": "$NVMF_PORT", 00:20:13.219 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:13.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:13.219 "hdgst": ${hdgst:-false}, 00:20:13.219 "ddgst": ${ddgst:-false} 00:20:13.219 }, 00:20:13.219 "method": "bdev_nvme_attach_controller" 00:20:13.219 } 00:20:13.219 EOF 00:20:13.219 )") 00:20:13.219 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:13.219 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:13.219 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:13.219 { 00:20:13.219 "params": { 00:20:13.219 "name": "Nvme$subsystem", 00:20:13.219 "trtype": "$TEST_TRANSPORT", 00:20:13.219 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:20:13.219 "adrfam": "ipv4", 00:20:13.219 "trsvcid": "$NVMF_PORT", 00:20:13.219 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:13.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:13.219 "hdgst": ${hdgst:-false}, 00:20:13.219 "ddgst": ${ddgst:-false} 00:20:13.219 }, 00:20:13.219 "method": "bdev_nvme_attach_controller" 00:20:13.219 } 00:20:13.219 EOF 00:20:13.219 )") 00:20:13.219 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:13.219 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:13.219 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:13.219 { 00:20:13.219 "params": { 00:20:13.219 "name": "Nvme$subsystem", 00:20:13.219 "trtype": "$TEST_TRANSPORT", 00:20:13.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:13.219 "adrfam": "ipv4", 00:20:13.219 "trsvcid": "$NVMF_PORT", 00:20:13.219 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:13.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:13.219 "hdgst": ${hdgst:-false}, 00:20:13.219 "ddgst": ${ddgst:-false} 00:20:13.219 }, 00:20:13.219 "method": "bdev_nvme_attach_controller" 00:20:13.219 } 00:20:13.219 EOF 00:20:13.219 )") 00:20:13.219 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:13.219 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:13.219 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:13.219 { 00:20:13.219 "params": { 00:20:13.219 "name": "Nvme$subsystem", 00:20:13.219 "trtype": "$TEST_TRANSPORT", 00:20:13.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:13.219 "adrfam": "ipv4", 00:20:13.219 "trsvcid": "$NVMF_PORT", 00:20:13.219 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:13.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:13.219 "hdgst": ${hdgst:-false}, 00:20:13.219 "ddgst": ${ddgst:-false} 00:20:13.219 }, 00:20:13.219 "method": "bdev_nvme_attach_controller" 00:20:13.219 } 00:20:13.219 EOF 00:20:13.219 )") 00:20:13.219 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:13.219 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:20:13.219 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:20:13.219 23:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:13.219 "params": { 00:20:13.219 "name": "Nvme1", 00:20:13.219 "trtype": "tcp", 00:20:13.219 "traddr": "10.0.0.2", 00:20:13.219 "adrfam": "ipv4", 00:20:13.219 "trsvcid": "4420", 00:20:13.219 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.219 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:13.219 "hdgst": false, 00:20:13.219 "ddgst": false 00:20:13.219 }, 00:20:13.219 "method": "bdev_nvme_attach_controller" 00:20:13.219 },{ 00:20:13.219 "params": { 00:20:13.219 "name": "Nvme2", 00:20:13.219 "trtype": "tcp", 00:20:13.219 "traddr": "10.0.0.2", 00:20:13.219 "adrfam": "ipv4", 00:20:13.219 "trsvcid": "4420", 00:20:13.219 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:13.219 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:13.219 "hdgst": false, 00:20:13.219 "ddgst": false 00:20:13.219 }, 00:20:13.219 "method": "bdev_nvme_attach_controller" 00:20:13.219 },{ 00:20:13.219 "params": { 00:20:13.219 "name": "Nvme3", 00:20:13.219 "trtype": "tcp", 00:20:13.219 "traddr": "10.0.0.2", 00:20:13.219 "adrfam": "ipv4", 00:20:13.219 "trsvcid": "4420", 00:20:13.219 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:13.219 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:13.219 "hdgst": false, 00:20:13.219 "ddgst": false 00:20:13.219 }, 00:20:13.219 "method": "bdev_nvme_attach_controller" 00:20:13.219 },{ 00:20:13.219 "params": { 00:20:13.219 "name": "Nvme4", 00:20:13.219 "trtype": "tcp", 00:20:13.219 "traddr": "10.0.0.2", 00:20:13.219 "adrfam": "ipv4", 00:20:13.219 "trsvcid": "4420", 00:20:13.219 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:13.219 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:13.219 "hdgst": false, 00:20:13.219 "ddgst": false 00:20:13.219 }, 00:20:13.219 "method": "bdev_nvme_attach_controller" 00:20:13.219 },{ 00:20:13.219 "params": { 00:20:13.219 "name": "Nvme5", 00:20:13.219 "trtype": "tcp", 00:20:13.219 "traddr": "10.0.0.2", 00:20:13.219 "adrfam": "ipv4", 00:20:13.219 "trsvcid": "4420", 00:20:13.219 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:13.219 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:13.219 "hdgst": false, 00:20:13.219 "ddgst": false 00:20:13.219 }, 00:20:13.219 "method": "bdev_nvme_attach_controller" 00:20:13.219 },{ 00:20:13.219 "params": { 00:20:13.219 "name": "Nvme6", 00:20:13.219 "trtype": "tcp", 00:20:13.219 "traddr": "10.0.0.2", 00:20:13.219 "adrfam": "ipv4", 00:20:13.219 "trsvcid": "4420", 00:20:13.219 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:13.219 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:13.219 "hdgst": false, 00:20:13.219 "ddgst": false 00:20:13.219 }, 00:20:13.219 "method": "bdev_nvme_attach_controller" 00:20:13.219 },{ 00:20:13.219 "params": { 00:20:13.219 "name": "Nvme7", 00:20:13.219 "trtype": "tcp", 00:20:13.219 "traddr": "10.0.0.2", 00:20:13.219 "adrfam": "ipv4", 00:20:13.219 "trsvcid": "4420", 00:20:13.219 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:13.219 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:13.219 "hdgst": false, 00:20:13.219 "ddgst": false 00:20:13.219 }, 00:20:13.219 "method": "bdev_nvme_attach_controller" 00:20:13.219 },{ 00:20:13.219 "params": { 00:20:13.219 "name": "Nvme8", 00:20:13.219 "trtype": "tcp", 00:20:13.219 "traddr": "10.0.0.2", 00:20:13.219 "adrfam": "ipv4", 00:20:13.219 "trsvcid": "4420", 00:20:13.219 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:13.219 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:13.219 "hdgst": false, 
00:20:13.219 "ddgst": false 00:20:13.219 }, 00:20:13.219 "method": "bdev_nvme_attach_controller" 00:20:13.219 },{ 00:20:13.219 "params": { 00:20:13.219 "name": "Nvme9", 00:20:13.219 "trtype": "tcp", 00:20:13.219 "traddr": "10.0.0.2", 00:20:13.219 "adrfam": "ipv4", 00:20:13.219 "trsvcid": "4420", 00:20:13.219 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:13.219 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:13.219 "hdgst": false, 00:20:13.219 "ddgst": false 00:20:13.219 }, 00:20:13.219 "method": "bdev_nvme_attach_controller" 00:20:13.219 },{ 00:20:13.219 "params": { 00:20:13.219 "name": "Nvme10", 00:20:13.219 "trtype": "tcp", 00:20:13.219 "traddr": "10.0.0.2", 00:20:13.219 "adrfam": "ipv4", 00:20:13.219 "trsvcid": "4420", 00:20:13.219 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:13.219 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:13.219 "hdgst": false, 00:20:13.219 "ddgst": false 00:20:13.219 }, 00:20:13.219 "method": "bdev_nvme_attach_controller" 00:20:13.219 }' 00:20:13.219 [2024-07-15 23:46:48.285120] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:20:13.219 [2024-07-15 23:46:48.285197] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3827024 ] 00:20:13.219 EAL: No free 2048 kB hugepages reported on node 1 00:20:13.476 [2024-07-15 23:46:48.348473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.476 [2024-07-15 23:46:48.457884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.383 Running I/O for 10 seconds... 00:20:15.383 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:15.383 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:20:15.383 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:15.383 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.383 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:15.383 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.383 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:15.383 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:15.383 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:15.383 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:20:15.383 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:20:15.383 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:15.383 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:15.383 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:15.383 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:15.383 23:46:50 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.383 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:15.383 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.383 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:20:15.383 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:20:15.383 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:15.640 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:15.640 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:15.640 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:15.640 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:15.640 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.640 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:15.640 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.640 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:15.640 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:15.640 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:15.898 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:15.898 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:15.898 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:15.898 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:15.898 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.898 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:15.898 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.898 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:20:15.898 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:20:15.898 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:20:15.898 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:20:15.898 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:20:15.898 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3827024 00:20:15.898 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3827024 ']' 00:20:15.898 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3827024 00:20:15.898 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@953 -- # uname 00:20:15.898 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:15.898 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3827024 00:20:15.898 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:15.898 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:15.898 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3827024' 00:20:15.898 killing process with pid 3827024 00:20:15.898 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3827024 00:20:15.898 23:46:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3827024
00:20:15.898 Received shutdown signal, test time was about 0.983536 seconds
00:20:15.898
00:20:15.898                                                                                         Latency(us)
00:20:15.898 Device Information                                            : runtime(s)     IOPS    MiB/s   Fail/s     TO/s    Average        min        max
00:20:15.898 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:15.898      Verification LBA range: start 0x0 length 0x400
00:20:15.898      Nvme1n1                                                  :       0.95   203.02    12.69     0.00     0.00  311529.88   21651.15  271853.04
00:20:15.899 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:15.899      Verification LBA range: start 0x0 length 0x400
00:20:15.899      Nvme2n1                                                  :       0.93   206.63    12.91     0.00     0.00  299275.00   24855.13  257872.02
00:20:15.899 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:15.899      Verification LBA range: start 0x0 length 0x400
00:20:15.899      Nvme3n1                                                  :       0.97   264.87    16.55     0.00     0.00  229512.72   17282.09  264085.81
00:20:15.899 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:15.899      Verification LBA range: start 0x0 length 0x400
00:20:15.899      Nvme4n1                                                  :       0.98   261.65    16.35     0.00     0.00  227762.25   20777.34  267192.70
00:20:15.899 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:15.899      Verification LBA range: start 0x0 length 0x400
00:20:15.899      Nvme5n1                                                  :       0.96   200.93    12.56     0.00     0.00  290169.24   22136.60  270299.59
00:20:15.899 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:15.899      Verification LBA range: start 0x0 length 0x400
00:20:15.899      Nvme6n1                                                  :       0.96   199.57    12.47     0.00     0.00  286182.97   22816.24  279620.27
00:20:15.899 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:15.899      Verification LBA range: start 0x0 length 0x400
00:20:15.899      Nvme7n1                                                  :       0.94   209.69    13.11     0.00     0.00  264472.04    1796.17  242337.56
00:20:15.899 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:15.899      Verification LBA range: start 0x0 length 0x400
00:20:15.899      Nvme8n1                                                  :       0.98   260.50    16.28     0.00     0.00  210668.85   22330.79  264085.81
00:20:15.899 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:15.899      Verification LBA range: start 0x0 length 0x400
00:20:15.899      Nvme9n1                                                  :       0.92   209.38    13.09     0.00     0.00  252984.64   20194.80  257872.02
00:20:15.899 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:15.899      Verification LBA range: start 0x0 length 0x400
00:20:15.899      Nvme10n1                                                 :       0.97   197.64    12.35     0.00     0.00  265271.12   22622.06  312242.63
00:20:15.899 ===================================================================================================================
00:20:15.899      Total                                                    :              2213.87   138.37     0.00     0.00  260053.79    1796.17  312242.63
00:20:16.462 23:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:20:17.393 23:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3826840 00:20:17.393 23:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:20:17.393 23:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:17.393 23:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:17.393 23:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:17.393 23:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:17.393 23:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:17.393 23:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:20:17.393 23:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:17.393 23:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:20:17.393 23:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:17.393 23:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:17.393 rmmod nvme_tcp 00:20:17.393 rmmod nvme_fabrics 00:20:17.393 rmmod nvme_keyring 00:20:17.393 23:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:17.393 23:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:20:17.393 23:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:20:17.393 23:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3826840 ']' 00:20:17.393 23:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 3826840 00:20:17.393 23:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3826840 ']' 00:20:17.393 23:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3826840 00:20:17.393 23:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:20:17.393 23:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:17.393 23:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3826840 00:20:17.393 23:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:17.393 23:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:17.393 23:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3826840' 00:20:17.393 killing process with pid 3826840 00:20:17.393 23:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3826840 00:20:17.393 23:46:52
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3826840 00:20:17.958 23:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:17.958 23:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:17.958 23:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:17.958 23:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:17.958 23:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:17.958 23:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:17.958 23:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:17.958 23:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.879 23:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:19.879 00:20:19.879 real 0m7.866s 00:20:19.879 user 0m23.909s 00:20:19.879 sys 0m1.540s 00:20:19.879 23:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:19.879 23:46:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:19.879 ************************************ 00:20:19.879 END TEST nvmf_shutdown_tc2 00:20:19.879 ************************************ 00:20:19.879 23:46:54 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:20:19.879 23:46:54 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:19.879 23:46:54 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:19.879 23:46:54 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:19.879 23:46:54 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:20.138 ************************************ 00:20:20.138 START TEST nvmf_shutdown_tc3 00:20:20.138 ************************************ 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:20.138 23:46:55 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:20.138 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:20.138 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:20.138 Found net devices under 0000:09:00.0: cvl_0_0 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:20.138 Found net devices under 0000:09:00.1: cvl_0_1 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:20.138 23:46:55 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:20.138 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:20.139 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:20.139 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:20.139 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:20:20.139 00:20:20.139 --- 10.0.0.2 ping statistics --- 00:20:20.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.139 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:20:20.139 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:20.139 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:20.139 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:20:20.139 00:20:20.139 --- 10.0.0.1 ping statistics --- 00:20:20.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.139 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:20:20.139 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:20.139 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:20:20.139 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:20.139 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:20.139 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:20.139 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:20.139 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:20.139 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:20.139 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:20.139 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:20.139 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:20.139 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:20.139 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:20.139 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3827937 00:20:20.139 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0x1E 00:20:20.139 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3827937 00:20:20.139 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3827937 ']' 00:20:20.139 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.139 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:20.139 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.139 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:20.139 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:20.139 [2024-07-15 23:46:55.258250] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:20:20.139 [2024-07-15 23:46:55.258348] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:20.397 EAL: No free 2048 kB hugepages reported on node 1 00:20:20.397 [2024-07-15 23:46:55.322310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:20.397 [2024-07-15 23:46:55.432316] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:20.397 [2024-07-15 23:46:55.432381] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:20.397 [2024-07-15 23:46:55.432409] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:20.397 [2024-07-15 23:46:55.432420] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:20.397 [2024-07-15 23:46:55.432429] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
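As in the tc2 run, nvmf_tcp_init has just rebuilt the test topology: one port of the NIC pair (cvl_0_0) moves into a private network namespace to act as the target, the sibling port (cvl_0_1) stays in the root namespace as the initiator, and both directions are ping-verified before the target starts. Without the namespace split the kernel would route 10.0.0.1 to 10.0.0.2 over loopback instead of across the link. Condensed into a standalone recipe (interface and namespace names as on this rig; run as root):

# Isolate the target-side port so NVMe/TCP traffic really crosses the wire.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
ip netns add $NS
ip link set cvl_0_0 netns $NS                                  # target side moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root ns
ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec $NS ip link set cvl_0_0 up
ip netns exec $NS ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # default NVMe/TCP port
ping -c 1 10.0.0.2                                             # root ns -> namespace
ip netns exec $NS ping -c 1 10.0.0.1                           # namespace -> root ns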
00:20:20.397 [2024-07-15 23:46:55.432491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:20:20.397 [2024-07-15 23:46:55.432545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:20:20.397 [2024-07-15 23:46:55.432610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:20:20.397 [2024-07-15 23:46:55.432612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:20:20.655 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:20:20.655 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0
00:20:20.655 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:20:20.655 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable
00:20:20.655 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:20.655 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:20.655 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:20:20.655 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:20.655 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:20.655 [2024-07-15 23:46:55.575625] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:20.655 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:20.655 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10})
00:20:20.655 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems
00:20:20.655 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable
00:20:20.655 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:20.655 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:20:20.655 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:20:20.655 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat
[... the shutdown.sh@27/@28 for/cat pair above repeats once per subsystem (ten iterations total); nine identical repetitions elided ...]
00:20:20.655 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd
00:20:20.655 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:20.655 23:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:20.655 Malloc1
00:20:20.655 [2024-07-15 23:46:55.650048] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:20.655 Malloc2
00:20:20.655 Malloc3
00:20:20.655 Malloc4
00:20:20.913 Malloc5
00:20:20.913 Malloc6
00:20:20.913 Malloc7
00:20:20.913 Malloc8
00:20:20.913 Malloc9
00:20:21.172 Malloc10
00:20:21.172 23:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:21.172 23:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems
00:20:21.172 23:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable
00:20:21.172 23:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:21.172 23:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3828114
00:20:21.172 23:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 3828114 /var/tmp/bdevperf.sock
00:20:21.172 23:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3828114 ']'
00:20:21.172 23:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:20:21.172 23:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10
00:20:21.172 23:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:20:21.172 23:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100
00:20:21.172 23:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=()
00:20:21.172 23:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
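Between shutdown.sh@26 and @35 in the trace above, the test writes one block of RPC commands per subsystem into rpcs.txt and then replays the whole file through a single rpc_cmd batch; that batch is what produces the Malloc1-Malloc10 bdevs and the 10.0.0.2:4420 listener echoed above. A sketch of that loop under those assumptions; $testdir stands in for the test/nvmf/target directory seen in the rm -rf path, and the exact RPC arguments (bdev size, serial numbers) are not visible in the trace and are illustrative only (the real script pipes a heredoc through cat; echo is used here to keep the sketch indentation-safe):

    rm -rf "$testdir/rpcs.txt"
    for i in "${num_subsystems[@]}"; do   # num_subsystems=({1..10}), per shutdown.sh@22
        {
            echo "bdev_malloc_create -b Malloc$i 128 512"
            echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
            echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
            echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
        } >> "$testdir/rpcs.txt"
    done
    rpc_cmd < "$testdir/rpcs.txt"         # one batched RPC session (shutdown.sh@35)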
00:20:21.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:20:21.172 23:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config
00:20:21.172 23:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable
00:20:21.172 23:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:20:21.172 23:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:21.172 23:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:20:21.172 {
00:20:21.172 "params": {
00:20:21.172 "name": "Nvme$subsystem",
00:20:21.172 "trtype": "$TEST_TRANSPORT",
00:20:21.172 "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:21.172 "adrfam": "ipv4",
00:20:21.172 "trsvcid": "$NVMF_PORT",
00:20:21.172 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:21.172 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:21.172 "hdgst": ${hdgst:-false},
00:20:21.172 "ddgst": ${ddgst:-false}
00:20:21.172 },
00:20:21.172 "method": "bdev_nvme_attach_controller"
00:20:21.172 }
00:20:21.172 EOF
00:20:21.172 )")
00:20:21.172 23:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat
[... the nvmf/common.sh@534 for / @554 config+= heredoc / @554 cat sequence above repeats verbatim for each of the ten subsystems; the remaining nine identical repetitions elided ...]
00:20:21.173 23:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq .
00:20:21.173 23:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:20:21.173 23:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:21.173 "params": { 00:20:21.173 "name": "Nvme1", 00:20:21.173 "trtype": "tcp", 00:20:21.173 "traddr": "10.0.0.2", 00:20:21.173 "adrfam": "ipv4", 00:20:21.173 "trsvcid": "4420", 00:20:21.173 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.173 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:21.173 "hdgst": false, 00:20:21.173 "ddgst": false 00:20:21.173 }, 00:20:21.173 "method": "bdev_nvme_attach_controller" 00:20:21.173 },{ 00:20:21.173 "params": { 00:20:21.173 "name": "Nvme2", 00:20:21.173 "trtype": "tcp", 00:20:21.173 "traddr": "10.0.0.2", 00:20:21.173 "adrfam": "ipv4", 00:20:21.173 "trsvcid": "4420", 00:20:21.173 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:21.173 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:21.173 "hdgst": false, 00:20:21.173 "ddgst": false 00:20:21.173 }, 00:20:21.173 "method": "bdev_nvme_attach_controller" 00:20:21.173 },{ 00:20:21.173 "params": { 00:20:21.173 "name": "Nvme3", 00:20:21.173 "trtype": "tcp", 00:20:21.173 "traddr": "10.0.0.2", 00:20:21.173 "adrfam": "ipv4", 00:20:21.173 "trsvcid": "4420", 00:20:21.173 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:21.173 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:21.173 "hdgst": false, 00:20:21.173 "ddgst": false 00:20:21.173 }, 00:20:21.173 "method": "bdev_nvme_attach_controller" 00:20:21.173 },{ 00:20:21.173 "params": { 00:20:21.173 "name": "Nvme4", 00:20:21.173 "trtype": "tcp", 00:20:21.173 "traddr": "10.0.0.2", 00:20:21.173 "adrfam": "ipv4", 00:20:21.173 "trsvcid": "4420", 00:20:21.173 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:21.173 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:21.173 "hdgst": false, 00:20:21.173 "ddgst": false 00:20:21.173 }, 00:20:21.173 "method": "bdev_nvme_attach_controller" 00:20:21.173 },{ 00:20:21.173 "params": { 00:20:21.173 "name": "Nvme5", 00:20:21.173 "trtype": "tcp", 00:20:21.173 "traddr": "10.0.0.2", 00:20:21.173 "adrfam": "ipv4", 00:20:21.173 "trsvcid": "4420", 00:20:21.173 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:21.173 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:21.173 "hdgst": false, 00:20:21.173 "ddgst": false 00:20:21.173 }, 00:20:21.173 "method": "bdev_nvme_attach_controller" 00:20:21.173 },{ 00:20:21.173 "params": { 00:20:21.173 "name": "Nvme6", 00:20:21.173 "trtype": "tcp", 00:20:21.173 "traddr": "10.0.0.2", 00:20:21.173 "adrfam": "ipv4", 00:20:21.173 "trsvcid": "4420", 00:20:21.173 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:21.173 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:21.173 "hdgst": false, 00:20:21.173 "ddgst": false 00:20:21.173 }, 00:20:21.173 "method": "bdev_nvme_attach_controller" 00:20:21.173 },{ 00:20:21.173 "params": { 00:20:21.173 "name": "Nvme7", 00:20:21.173 "trtype": "tcp", 00:20:21.173 "traddr": "10.0.0.2", 00:20:21.173 "adrfam": "ipv4", 00:20:21.173 "trsvcid": "4420", 00:20:21.173 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:21.173 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:21.173 "hdgst": false, 00:20:21.173 "ddgst": false 00:20:21.173 }, 00:20:21.173 "method": "bdev_nvme_attach_controller" 00:20:21.173 },{ 00:20:21.173 "params": { 00:20:21.173 "name": "Nvme8", 00:20:21.173 "trtype": "tcp", 00:20:21.173 "traddr": "10.0.0.2", 00:20:21.173 "adrfam": "ipv4", 00:20:21.173 "trsvcid": "4420", 00:20:21.173 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:21.173 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:21.173 "hdgst": false, 
00:20:21.173 "ddgst": false 00:20:21.173 }, 00:20:21.173 "method": "bdev_nvme_attach_controller" 00:20:21.173 },{ 00:20:21.173 "params": { 00:20:21.173 "name": "Nvme9", 00:20:21.173 "trtype": "tcp", 00:20:21.173 "traddr": "10.0.0.2", 00:20:21.173 "adrfam": "ipv4", 00:20:21.173 "trsvcid": "4420", 00:20:21.173 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:21.173 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:21.173 "hdgst": false, 00:20:21.173 "ddgst": false 00:20:21.173 }, 00:20:21.173 "method": "bdev_nvme_attach_controller" 00:20:21.173 },{ 00:20:21.173 "params": { 00:20:21.173 "name": "Nvme10", 00:20:21.173 "trtype": "tcp", 00:20:21.173 "traddr": "10.0.0.2", 00:20:21.173 "adrfam": "ipv4", 00:20:21.173 "trsvcid": "4420", 00:20:21.173 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:21.173 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:21.173 "hdgst": false, 00:20:21.173 "ddgst": false 00:20:21.173 }, 00:20:21.173 "method": "bdev_nvme_attach_controller" 00:20:21.173 }' 00:20:21.173 [2024-07-15 23:46:56.169586] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:20:21.173 [2024-07-15 23:46:56.169665] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3828114 ] 00:20:21.173 EAL: No free 2048 kB hugepages reported on node 1 00:20:21.173 [2024-07-15 23:46:56.232457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.453 [2024-07-15 23:46:56.342377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.825 Running I/O for 10 seconds... 00:20:22.825 23:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:22.825 23:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:20:22.825 23:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:22.825 23:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.825 23:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:22.825 23:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.825 23:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:22.825 23:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:22.825 23:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:22.825 23:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:22.825 23:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:20:22.825 23:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:20:22.825 23:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:22.825 23:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:22.825 23:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme1n1 00:20:22.825 23:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:22.825 23:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.825 23:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:22.825 23:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.825 23:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:20:22.825 23:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:20:22.825 23:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:23.084 23:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:23.084 23:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:23.084 23:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:23.084 23:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:23.084 23:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.084 23:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:23.084 23:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.341 23:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:23.341 23:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:23.341 23:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:23.615 23:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:23.615 23:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:23.615 23:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:23.615 23:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:23.615 23:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.615 23:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:23.615 23:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.615 23:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:20:23.615 23:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:20:23.615 23:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:20:23.615 23:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:20:23.615 23:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:20:23.615 23:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3827937 00:20:23.615 23:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 3827937 ']' 
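The loop traced above (shutdown.sh@50-69) is waitforio, invoked here as waitforio /var/tmp/bdevperf.sock Nvme1n1 (shutdown.sh@132): poll bdev_get_iostat over bdevperf's RPC socket until Nvme1n1 has completed at least 100 reads (3, then 67, then 131 in this run), retrying up to ten times with a 0.25 s delay before giving up. Reassembled directly from the xtrace; only the function glue is assumed:

    waitforio() {
        local rpc_sock=$1 bdev=$2
        [ -z "$rpc_sock" ] && return 1   # shutdown.sh@50 argument guard
        [ -z "$bdev" ] && return 1       # shutdown.sh@54 argument guard
        local ret=1 i
        for ((i = 10; i != 0; i--)); do  # shutdown.sh@58-59
            local read_io_count
            read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then   # shutdown.sh@63
                ret=0
                break
            fi
            sleep 0.25                   # shutdown.sh@67
        done
        return $ret                      # shutdown.sh@69
    }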
00:20:23.615 23:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 3827937
00:20:23.615 23:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname
00:20:23.615 23:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:20:23.615 23:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3827937
00:20:23.615 23:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:20:23.615 23:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:20:23.615 23:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3827937'
00:20:23.615 killing process with pid 3827937
00:20:23.615 23:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 3827937
00:20:23.615 23:46:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 3827937
00:20:23.615 [2024-07-15 23:46:58.537612] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d6a80 is same with the state(5) to be set
[... same tcp.c:1621 error repeated for tqpair=0x5d6a80 at timestamps 23:46:58.537742 through 23:46:58.538681; duplicates elided ...]
00:20:23.616 [2024-07-15 23:46:58.542978] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d9480 is same with the state(5) to be set
[... same error repeated for tqpair=0x5d9480 through 23:46:58.543782; duplicates elided ...]
00:20:23.617 [2024-07-15 23:46:58.546338] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d6f20 is same with the state(5) to be set
[... same error repeated for tqpair=0x5d6f20 through 23:46:58.547131; duplicates elided ...]
00:20:23.617 [2024-07-15 23:46:58.547847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:23.617 [2024-07-15 23:46:58.547932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for cid:0-3 on each of three host qpairs (timestamps 23:46:58.547847 through 23:46:58.548698); duplicates elided ...]
00:20:23.617 [2024-07-15 23:46:58.548125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26c0830 is same with the state(5) to be set
00:20:23.618 [2024-07-15 23:46:58.548418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x288c240 is same with the state(5) to be set
00:20:23.618 [2024-07-15 23:46:58.548721] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x275c600 is same with the state(5) to be set
00:20:23.618 [2024-07-15 23:46:58.548757] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d73c0 is same with the state(5) to be set
[... same error repeated for tqpair=0x5d73c0 through 23:46:58.549229; duplicates elided ...]
00:20:23.618 [2024-07-15 23:46:58.550747] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d7880 is same with the state(5) to be set
[... same error repeated for tqpair=0x5d7880 through 23:46:58.550854; duplicates elided ...]
00:20:23.618 [2024-07-15 23:46:58.550867]
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d7880 is same with the state(5) to be set 00:20:23.618 [2024-07-15 23:46:58.550879] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d7880 is same with the state(5) to be set 00:20:23.618 [2024-07-15 23:46:58.550891] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d7880 is same with the state(5) to be set 00:20:23.618 [2024-07-15 23:46:58.550903] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d7880 is same with the state(5) to be set 00:20:23.618 [2024-07-15 23:46:58.550921] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d7880 is same with the state(5) to be set 00:20:23.618 [2024-07-15 23:46:58.550933] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d7880 is same with the state(5) to be set 00:20:23.618 [2024-07-15 23:46:58.550949] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d7880 is same with the state(5) to be set 00:20:23.618 [2024-07-15 23:46:58.550968] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d7880 is same with the state(5) to be set 00:20:23.618 [2024-07-15 23:46:58.550982] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d7880 is same with the state(5) to be set 00:20:23.618 [2024-07-15 23:46:58.550994] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d7880 is same with the state(5) to be set 00:20:23.618 [2024-07-15 23:46:58.551006] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d7880 is same with the state(5) to be set 00:20:23.618 [2024-07-15 23:46:58.551018] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d7880 is same with the state(5) to be set 00:20:23.618 [2024-07-15 23:46:58.551030] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d7880 is same with the state(5) to be set 00:20:23.618 [2024-07-15 23:46:58.551042] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d7880 is same with the state(5) to be set 00:20:23.618 [2024-07-15 23:46:58.551054] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d7880 is same with the state(5) to be set 00:20:23.618 [2024-07-15 23:46:58.551066] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d7880 is same with the state(5) to be set 00:20:23.618 [2024-07-15 23:46:58.551078] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d7880 is same with the state(5) to be set 00:20:23.618 [2024-07-15 23:46:58.551090] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d7880 is same with the state(5) to be set 00:20:23.618 [2024-07-15 23:46:58.551101] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d7880 is same with the state(5) to be set 00:20:23.618 [2024-07-15 23:46:58.551113] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d7880 is same with the state(5) to be set 00:20:23.618 [2024-07-15 23:46:58.551125] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d7880 is same with the state(5) to be set 00:20:23.618 [2024-07-15 23:46:58.551137] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d7880 is same with the state(5) to be set 
00:20:23.618 [... "recv state of tqpair=0x5d7880" error repeated 33 more times, 23:46:58.551149 - 23:46:58.551541 ...] 00:20:23.619 [2024-07-15 23:46:58.551950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.619 [2024-07-15 23:46:58.551995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.619 [... same WRITE / ABORTED - SQ DELETION pair repeated for cid:62 lba:32512 and cid:63 lba:32640 ...] 00:20:23.619 [... READ sqid:1 cid:5 - cid:18 nsid:1 lba:25216 - lba:26880 len:128, each aborted with SQ DELETION (00/08), 23:46:58.552153 - 23:46:58.552899; from 23:46:58.552371 these notices were emitted interleaved with 18 tcp.c:1621 "recv state of tqpair=0x5d7d20 is same with the state(5) to be set" errors (23:46:58.552371 - 23:46:58.552619), untangled here ...] 00:20:23.619 [... WRITE sqid:1 cid:0 - cid:4 nsid:1 lba:32768 - lba:33280 len:128, each aborted with SQ DELETION (00/08), 23:46:58.552926 - 23:46:58.553188 ...] 00:20:23.620 [... READ sqid:1 cid:19 - cid:37 nsid:1 lba:27008 - lba:29312 len:128, each aborted with SQ DELETION (00/08), 23:46:58.553215 - 23:46:58.554277; from 23:46:58.553491 interleaved with tcp.c:1621 "recv state of tqpair=0x5d81e0 is same with the state(5) to be set" errors (23:46:58.553491 - 23:46:58.554287), untangled here ...]
00:20:23.621 [... "recv state of tqpair=0x5d81e0" error repeated 7 more times, 23:46:58.554314 - 23:46:58.554393, interleaved with the READ aborts for cid:38 lba:29440 and cid:39 lba:29568 ...] 00:20:23.621 [... READ sqid:1 cid:40 - cid:60 nsid:1 lba:29696 - lba:32256 len:128, each aborted with SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0, 23:46:58.554424 - 23:46:58.555507 ...] 00:20:23.621 [2024-07-15 23:46:58.555624] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2810780 was disconnected and freed. reset controller.
00:20:23.621 [2024-07-15 23:46:58.555754] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8680 is same with the state(5) to be set 00:20:23.622 [... previous error repeated 62 more times for tqpair=0x5d8680, 23:46:58.555780 - 23:46:58.556594 ...] 00:20:23.622 [2024-07-15 23:46:58.557659] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8b20 is same with the state(5) to be set 00:20:23.622 [... previous error repeated 23 more times for tqpair=0x5d8b20, 23:46:58.557686 - 23:46:58.557992 ...]
00:20:23.622 [2024-07-15 23:46:58.558004] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8b20 is same with the state(5) to be set 00:20:23.622 [2024-07-15 23:46:58.558015] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8b20 is same with the state(5) to be set 00:20:23.622 [2024-07-15 23:46:58.558027] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8b20 is same with the state(5) to be set 00:20:23.622 [2024-07-15 23:46:58.558039] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8b20 is same with the state(5) to be set 00:20:23.622 [2024-07-15 23:46:58.558051] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8b20 is same with the state(5) to be set 00:20:23.622 [2024-07-15 23:46:58.558062] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8b20 is same with the state(5) to be set 00:20:23.622 [2024-07-15 23:46:58.558074] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8b20 is same with the state(5) to be set 00:20:23.622 [2024-07-15 23:46:58.558086] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8b20 is same with the state(5) to be set 00:20:23.622 [2024-07-15 23:46:58.558098] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8b20 is same with the state(5) to be set 00:20:23.622 [2024-07-15 23:46:58.558110] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8b20 is same with the state(5) to be set 00:20:23.622 [2024-07-15 23:46:58.558122] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8b20 is same with the state(5) to be set 00:20:23.622 [2024-07-15 23:46:58.558134] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8b20 is same with the state(5) to be set 00:20:23.622 [2024-07-15 23:46:58.558146] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8b20 is same with the state(5) to be set 00:20:23.622 [2024-07-15 23:46:58.558158] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8b20 is same with the state(5) to be set 00:20:23.622 [2024-07-15 23:46:58.558169] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8b20 is same with the state(5) to be set 00:20:23.622 [2024-07-15 23:46:58.558181] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8b20 is same with the state(5) to be set 00:20:23.622 [2024-07-15 23:46:58.558193] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8b20 is same with the state(5) to be set 00:20:23.622 [2024-07-15 23:46:58.558205] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8b20 is same with the state(5) to be set 00:20:23.622 [2024-07-15 23:46:58.558216] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8b20 is same with the state(5) to be set 00:20:23.622 [2024-07-15 23:46:58.558228] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8b20 is same with the state(5) to be set 00:20:23.622 [2024-07-15 23:46:58.558249] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8b20 is same with the state(5) to be set 00:20:23.622 [2024-07-15 23:46:58.558261] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8b20 is 
00:20:23.623 [2024-07-15 23:46:58.558354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:20:23.623 [2024-07-15 23:46:58.558441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26ec450 (9): Bad file descriptor
00:20:23.623 [2024-07-15 23:46:58.558483] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26c0830 (9): Bad file descriptor
00:20:23.623 [2024-07-15 23:46:58.558529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x288c240 (9): Bad file descriptor
00:20:23.623 [2024-07-15 23:46:58.558600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:23.623 [2024-07-15 23:46:58.558630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:23.623 [2024-07-15 23:46:58.558656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:23.623 [2024-07-15 23:46:58.558681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:23.623 [2024-07-15 23:46:58.558707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:23.623 [2024-07-15 23:46:58.558730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:23.623 [2024-07-15 23:46:58.558757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:23.623 [2024-07-15 23:46:58.558790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:23.623 [2024-07-15 23:46:58.558814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26e3280 is same with the state(5) to be set
00:20:23.623 [2024-07-15 23:46:58.558879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:23.623 [2024-07-15 23:46:58.558908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:23.623 [2024-07-15 23:46:58.558934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:23.623 [2024-07-15 23:46:58.558967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:23.623 [2024-07-15 23:46:58.558996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:23.623 [2024-07-15 23:46:58.559020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:23.623 [2024-07-15 23:46:58.559046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:23.623 [2024-07-15 23:46:58.559069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:23.623 [2024-07-15 23:46:58.559092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2784990 is same with the state(5) to be set
00:20:23.623 [2024-07-15 23:46:58.559136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x275c600 (9): Bad file descriptor
00:20:23.623 [2024-07-15 23:46:58.559209] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d8fc0 is same with the state(5) to be set
00:20:23.623 [2024-07-15 23:46:58.559220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:23.623 [2024-07-15 23:46:58.559249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:23.623 [2024-07-15 23:46:58.559277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:23.623 [2024-07-15 23:46:58.559303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:23.623 [2024-07-15 23:46:58.559330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:23.623 [2024-07-15 23:46:58.559361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:23.623 [2024-07-15 23:46:58.559387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:23.623 [2024-07-15 23:46:58.559411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:23.623 [2024-07-15 23:46:58.559437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2784bb0 is same with the state(5) to be set
00:20:23.623 [2024-07-15 23:46:58.559502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:23.623 [2024-07-15 23:46:58.559533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:23.623 [2024-07-15 23:46:58.559561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:23.623 [2024-07-15 23:46:58.559586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:23.623 [2024-07-15 23:46:58.559620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:23.624 [2024-07-15 23:46:58.559647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:23.624 [2024-07-15 23:46:58.559672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:23.624 [2024-07-15 23:46:58.559697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:23.624 [2024-07-15 23:46:58.559721] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26e2c60 is same with the state(5) to be set
00:20:23.624 [2024-07-15 23:46:58.559786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:23.624 [2024-07-15 23:46:58.559816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:23.624 [2024-07-15 23:46:58.559841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:23.624 [2024-07-15 23:46:58.559877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:23.624 [2024-07-15 23:46:58.559904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:23.624 [2024-07-15 23:46:58.559930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:23.624 [2024-07-15 23:46:58.559968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:23.624 [2024-07-15 23:46:58.559998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:23.624 [2024-07-15 23:46:58.560020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c2610 is same with the state(5) to be set
00:20:23.624 [2024-07-15 23:46:58.560128] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:23.624 [2024-07-15 23:46:58.561520] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:23.624 [2024-07-15 23:46:58.561685] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:23.624 [2024-07-15 23:46:58.562486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:23.624 [2024-07-15 23:46:58.562519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:23.624 [2024-07-15 23:46:58.562567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:23.624 [2024-07-15 23:46:58.562608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:23.624 [2024-07-15 23:46:58.562635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:23.624 [2024-07-15 23:46:58.562660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:23.624 [2024-07-15 23:46:58.562687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:23.624 [2024-07-15 23:46:58.562711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:23.624 [2024-07-15 23:46:58.562737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:23.624 [2024-07-15 23:46:58.562763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:23.624 [2024-07-15 23:46:58.562790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:23.624 [2024-07-15 23:46:58.562815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:23.624 [2024-07-15 23:46:58.562841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:23.624 [2024-07-15 23:46:58.562867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:23.624 [2024-07-15 23:46:58.562893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:23.624 [2024-07-15 23:46:58.562917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:23.624 [2024-07-15 23:46:58.562971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.624 [2024-07-15 23:46:58.563001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.624 [2024-07-15 23:46:58.563028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.624 [2024-07-15 23:46:58.563054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.624 [2024-07-15 23:46:58.563080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.624 [2024-07-15 23:46:58.563106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.624 [2024-07-15 23:46:58.563133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.624 [2024-07-15 23:46:58.563159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.624 [2024-07-15 23:46:58.563185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.624 [2024-07-15 23:46:58.563211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.624 [2024-07-15 23:46:58.563245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.624 [2024-07-15 23:46:58.563276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.624 [2024-07-15 23:46:58.563305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.624 [2024-07-15 23:46:58.563330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.625 [2024-07-15 23:46:58.563357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.625 [2024-07-15 23:46:58.563381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.625 [2024-07-15 23:46:58.563408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.625 [2024-07-15 23:46:58.563433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.625 [2024-07-15 23:46:58.563460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.625 [2024-07-15 23:46:58.563484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.625 
[2024-07-15 23:46:58.563511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.625 [2024-07-15 23:46:58.563537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.625 [2024-07-15 23:46:58.563565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.625 [2024-07-15 23:46:58.563591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.625 [2024-07-15 23:46:58.563617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.625 [2024-07-15 23:46:58.563642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.625 [2024-07-15 23:46:58.563667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29189f0 is same with the state(5) to be set 00:20:23.625 [2024-07-15 23:46:58.563756] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x29189f0 was disconnected and freed. reset controller. 00:20:23.625 [2024-07-15 23:46:58.564324] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.625 [2024-07-15 23:46:58.564362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26ec450 with addr=10.0.0.2, port=4420 00:20:23.625 [2024-07-15 23:46:58.564389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26ec450 is same with the state(5) to be set 00:20:23.625 [2024-07-15 23:46:58.564526] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:23.625 [2024-07-15 23:46:58.565697] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:23.625 [2024-07-15 23:46:58.566010] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:20:23.625 [2024-07-15 23:46:58.566057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26e3280 (9): Bad file descriptor 00:20:23.625 [2024-07-15 23:46:58.566094] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26ec450 (9): Bad file descriptor 00:20:23.625 [2024-07-15 23:46:58.566300] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:23.625 [2024-07-15 23:46:58.566525] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:20:23.625 [2024-07-15 23:46:58.566560] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:20:23.625 [2024-07-15 23:46:58.566587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:20:23.625 [2024-07-15 23:46:58.567088] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:23.625 [2024-07-15 23:46:58.567190] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:23.625 [2024-07-15 23:46:58.567244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:23.625 [2024-07-15 23:46:58.567373] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.625 [2024-07-15 23:46:58.567407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26e3280 with addr=10.0.0.2, port=4420 00:20:23.625 [2024-07-15 23:46:58.567435] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26e3280 is same with the state(5) to be set 00:20:23.625 [2024-07-15 23:46:58.567566] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26e3280 (9): Bad file descriptor 00:20:23.625 [2024-07-15 23:46:58.567668] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:20:23.625 [2024-07-15 23:46:58.567695] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:20:23.625 [2024-07-15 23:46:58.567719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:20:23.625 [2024-07-15 23:46:58.567805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:23.625 [2024-07-15 23:46:58.568425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2784990 (9): Bad file descriptor 00:20:23.625 [2024-07-15 23:46:58.568516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.625 [2024-07-15 23:46:58.568547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.625 [2024-07-15 23:46:58.568577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.625 [2024-07-15 23:46:58.568600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.625 [2024-07-15 23:46:58.568628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.625 [2024-07-15 23:46:58.568652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.625 [2024-07-15 23:46:58.568679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.625 [2024-07-15 23:46:58.568703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.625 [2024-07-15 23:46:58.568727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x288a350 is same with the state(5) to be set 00:20:23.625 [2024-07-15 23:46:58.568772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2784bb0 (9): Bad file descriptor 00:20:23.625 [2024-07-15 23:46:58.568820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26e2c60 (9): Bad file descriptor 00:20:23.625 [2024-07-15 23:46:58.568866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c2610 (9): Bad file descriptor 00:20:23.625 [2024-07-15 23:46:58.569070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:23.625 [2024-07-15 23:46:58.569100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.625 [2024-07-15 23:46:58.569137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.625 [2024-07-15 23:46:58.569169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.625 [2024-07-15 23:46:58.569199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.625 [2024-07-15 23:46:58.569225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.625 [2024-07-15 23:46:58.569261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.625 [2024-07-15 23:46:58.569287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.625 [2024-07-15 23:46:58.569316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.625 [2024-07-15 23:46:58.569341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.625 [2024-07-15 23:46:58.569369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.625 [2024-07-15 23:46:58.569394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.625 [2024-07-15 23:46:58.569422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.625 [2024-07-15 23:46:58.569448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.625 [2024-07-15 23:46:58.569478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.625 [2024-07-15 23:46:58.569504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.625 [2024-07-15 23:46:58.569532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.625 [2024-07-15 23:46:58.569557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.625 [2024-07-15 23:46:58.569585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.625 [2024-07-15 23:46:58.569610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.625 [2024-07-15 23:46:58.569638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.625 
[2024-07-15 23:46:58.569663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.625 [2024-07-15 23:46:58.569690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.625 [2024-07-15 23:46:58.569715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.625 [2024-07-15 23:46:58.569742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.625 [2024-07-15 23:46:58.569767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.625 [2024-07-15 23:46:58.569795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.625 [2024-07-15 23:46:58.569819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.625 [2024-07-15 23:46:58.569852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.625 [2024-07-15 23:46:58.569878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.625 [2024-07-15 23:46:58.569906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.625 [2024-07-15 23:46:58.569931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.625 [2024-07-15 23:46:58.569966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.625 [2024-07-15 23:46:58.569993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.625 [2024-07-15 23:46:58.570021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.625 [2024-07-15 23:46:58.570045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.626 [2024-07-15 23:46:58.570074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.626 [2024-07-15 23:46:58.570097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.626 [2024-07-15 23:46:58.570126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.626 [2024-07-15 23:46:58.570151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.626 [2024-07-15 23:46:58.570181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.626 [2024-07-15 
23:46:58.570205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:23.626 [2024-07-15 23:46:58.570234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:23.626 [2024-07-15 23:46:58.570264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ + ABORTED - SQ DELETION (00/08) pairs repeat for cid:22-63, lba:19200-24448 ...]
00:20:23.627 [2024-07-15 23:46:58.572599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2754d70 is same with the state(5) to be set
00:20:23.627 [2024-07-15 23:46:58.574173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:23.627 [2024-07-15 23:46:58.574206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ + ABORTED - SQ DELETION (00/08) pairs repeat for cid:1-63, lba:16512-24448 ...]
00:20:23.628 [2024-07-15 23:46:58.585881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x280f2f0 is same with the state(5) to be set
00:20:23.628 [2024-07-15 23:46:58.587533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:23.628 [2024-07-15 23:46:58.587565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ + ABORTED - SQ DELETION (00/08) pairs repeat for cid:1-63, lba:16512-24448 ...]
00:20:23.630 [2024-07-15 23:46:58.591021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x291efc0 is same with the state(5) to be set
00:20:23.630 [2024-07-15 23:46:58.592987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:23.630 [2024-07-15 23:46:58.593032] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:20:23.630 [2024-07-15 23:46:58.593066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:20:23.630 [2024-07-15 23:46:58.593224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x288a350 (9): Bad file descriptor
00:20:23.630 [2024-07-15 23:46:58.593713] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.630 [2024-07-15 23:46:58.593755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26c0830 with addr=10.0.0.2, port=4420
00:20:23.630 [2024-07-15 23:46:58.593782] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26c0830 is same with the state(5) to be set
00:20:23.630 [2024-07-15 23:46:58.593911] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.630 [2024-07-15 23:46:58.593947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x288c240 with addr=10.0.0.2, port=4420
00:20:23.630 [2024-07-15 23:46:58.593985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x288c240 is same with the state(5) to be set
00:20:23.630 [2024-07-15 23:46:58.594108] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.630 [2024-07-15 23:46:58.594142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x275c600 with addr=10.0.0.2, port=4420
00:20:23.630 [2024-07-15 23:46:58.594168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x275c600 is same with the state(5) to be set
00:20:23.630 [2024-07-15 23:46:58.594931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:23.630 [2024-07-15 23:46:58.594973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... WRITE/READ + ABORTED - SQ DELETION (00/08) pairs repeat: WRITE cid:60-63 (lba:32256-32640), READ cid:5-16 (lba:25216-26624), WRITE cid:0-4 (lba:32768-33280), READ cid:17-27 (lba:26752-28032) ...]
00:20:23.631 [2024-07-15 23:46:58.596726] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.631 [2024-07-15 23:46:58.596753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.631 [2024-07-15 23:46:58.596780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.631 [2024-07-15 23:46:58.596808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.631 [2024-07-15 23:46:58.596834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.631 [2024-07-15 23:46:58.596862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.631 [2024-07-15 23:46:58.596888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.631 [2024-07-15 23:46:58.596915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.631 [2024-07-15 23:46:58.596941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.631 [2024-07-15 23:46:58.596979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.631 [2024-07-15 23:46:58.597006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.631 [2024-07-15 23:46:58.597033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.631 [2024-07-15 23:46:58.597060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.631 [2024-07-15 23:46:58.597087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.631 [2024-07-15 23:46:58.597112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.631 [2024-07-15 23:46:58.597138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.631 [2024-07-15 23:46:58.597169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.631 [2024-07-15 23:46:58.597198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.631 [2024-07-15 23:46:58.597223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.631 [2024-07-15 23:46:58.597249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.631 [2024-07-15 23:46:58.597275] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.631 [2024-07-15 23:46:58.597303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.631 [2024-07-15 23:46:58.597328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.631 [2024-07-15 23:46:58.597355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.631 [2024-07-15 23:46:58.597380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.631 [2024-07-15 23:46:58.597406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.631 [2024-07-15 23:46:58.597432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.631 [2024-07-15 23:46:58.597460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.631 [2024-07-15 23:46:58.597486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.631 [2024-07-15 23:46:58.597513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.631 [2024-07-15 23:46:58.597539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.631 [2024-07-15 23:46:58.597566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.631 [2024-07-15 23:46:58.597592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.631 [2024-07-15 23:46:58.597620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.631 [2024-07-15 23:46:58.597646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.631 [2024-07-15 23:46:58.597673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.631 [2024-07-15 23:46:58.597698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.631 [2024-07-15 23:46:58.597727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.631 [2024-07-15 23:46:58.597752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.631 [2024-07-15 23:46:58.597780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.631 [2024-07-15 23:46:58.597805] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.631 [2024-07-15 23:46:58.597839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.631 [2024-07-15 23:46:58.597865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.631 [2024-07-15 23:46:58.597894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.631 [2024-07-15 23:46:58.597919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.631 [2024-07-15 23:46:58.597946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.631 [2024-07-15 23:46:58.597980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.631 [2024-07-15 23:46:58.598009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.631 [2024-07-15 23:46:58.598034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.631 [2024-07-15 23:46:58.598062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.631 [2024-07-15 23:46:58.598087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.631 [2024-07-15 23:46:58.598115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.631 [2024-07-15 23:46:58.598140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.631 [2024-07-15 23:46:58.598170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.631 [2024-07-15 23:46:58.598194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.631 [2024-07-15 23:46:58.598224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.631 [2024-07-15 23:46:58.598248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.631 [2024-07-15 23:46:58.598277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.631 [2024-07-15 23:46:58.598302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.631 [2024-07-15 23:46:58.598331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.631 [2024-07-15 23:46:58.598355] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.631 [2024-07-15 23:46:58.598384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.631 [2024-07-15 23:46:58.598408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.632 [2024-07-15 23:46:58.598434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bbb50 is same with the state(5) to be set 00:20:23.632 [2024-07-15 23:46:58.599949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.632 [2024-07-15 23:46:58.599989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.632 [2024-07-15 23:46:58.600030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.632 [2024-07-15 23:46:58.600058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.632 [2024-07-15 23:46:58.600088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.632 [2024-07-15 23:46:58.600115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.632 [2024-07-15 23:46:58.600142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.632 [2024-07-15 23:46:58.600167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.632 [2024-07-15 23:46:58.600193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.632 [2024-07-15 23:46:58.600221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.632 [2024-07-15 23:46:58.600247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.632 [2024-07-15 23:46:58.600274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.632 [2024-07-15 23:46:58.600302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.632 [2024-07-15 23:46:58.600328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.632 [2024-07-15 23:46:58.600356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.632 [2024-07-15 23:46:58.600381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.632 [2024-07-15 23:46:58.600408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.632 [2024-07-15 23:46:58.600434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.632 [2024-07-15 23:46:58.600461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.632 [2024-07-15 23:46:58.600487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.632 [2024-07-15 23:46:58.600515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.632 [2024-07-15 23:46:58.600539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.632 [2024-07-15 23:46:58.600566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.632 [2024-07-15 23:46:58.600592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.632 [2024-07-15 23:46:58.600619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.632 [2024-07-15 23:46:58.600645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.632 [2024-07-15 23:46:58.600673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.632 [2024-07-15 23:46:58.600706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.632 [2024-07-15 23:46:58.600734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.632 [2024-07-15 23:46:58.600759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.632 [2024-07-15 23:46:58.600787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.632 [2024-07-15 23:46:58.600812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.632 [2024-07-15 23:46:58.600838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.632 [2024-07-15 23:46:58.600865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.632 [2024-07-15 23:46:58.600893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.632 [2024-07-15 23:46:58.600919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.632 [2024-07-15 23:46:58.600947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 
nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.632 [2024-07-15 23:46:58.600982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.632 [2024-07-15 23:46:58.601010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.632 [2024-07-15 23:46:58.601035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.632 [2024-07-15 23:46:58.601063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.632 [2024-07-15 23:46:58.601088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.632 [2024-07-15 23:46:58.601115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.632 [2024-07-15 23:46:58.601140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.632 [2024-07-15 23:46:58.601168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.632 [2024-07-15 23:46:58.601192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.632 [2024-07-15 23:46:58.601221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.632 [2024-07-15 23:46:58.601246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.632 [2024-07-15 23:46:58.601274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.632 [2024-07-15 23:46:58.601297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.632 [2024-07-15 23:46:58.601325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.632 [2024-07-15 23:46:58.601350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.632 [2024-07-15 23:46:58.601383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.632 [2024-07-15 23:46:58.601409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.632 [2024-07-15 23:46:58.601437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.632 [2024-07-15 23:46:58.601462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.632 [2024-07-15 23:46:58.601490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.632 [2024-07-15 23:46:58.601516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.632 [2024-07-15 23:46:58.601544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.632 [2024-07-15 23:46:58.601568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.632 [2024-07-15 23:46:58.601597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.632 [2024-07-15 23:46:58.601623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.632 [2024-07-15 23:46:58.601653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.632 [2024-07-15 23:46:58.601678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.632 [2024-07-15 23:46:58.601707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.632 [2024-07-15 23:46:58.601731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.632 [2024-07-15 23:46:58.601761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.632 [2024-07-15 23:46:58.601785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.633 [2024-07-15 23:46:58.601816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.633 [2024-07-15 23:46:58.601842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.633 [2024-07-15 23:46:58.601871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.633 [2024-07-15 23:46:58.601897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.633 [2024-07-15 23:46:58.601925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.633 [2024-07-15 23:46:58.601949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.633 [2024-07-15 23:46:58.601987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.633 [2024-07-15 23:46:58.602013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.633 [2024-07-15 23:46:58.602040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:23.633 [2024-07-15 23:46:58.602070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.633 [2024-07-15 23:46:58.602100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.633 [2024-07-15 23:46:58.602125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.633 [2024-07-15 23:46:58.602154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.633 [2024-07-15 23:46:58.602179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.633 [2024-07-15 23:46:58.602208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.633 [2024-07-15 23:46:58.602232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.633 [2024-07-15 23:46:58.602261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.633 [2024-07-15 23:46:58.602286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.633 [2024-07-15 23:46:58.602315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.633 [2024-07-15 23:46:58.602340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.633 [2024-07-15 23:46:58.602367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.633 [2024-07-15 23:46:58.602391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.633 [2024-07-15 23:46:58.602419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.633 [2024-07-15 23:46:58.602442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.633 [2024-07-15 23:46:58.602471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.633 [2024-07-15 23:46:58.602495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.633 [2024-07-15 23:46:58.602523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.633 [2024-07-15 23:46:58.602547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.633 [2024-07-15 23:46:58.602575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:23.633 [2024-07-15 23:46:58.602600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.633 [2024-07-15 23:46:58.602629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.633 [2024-07-15 23:46:58.602653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.633 [2024-07-15 23:46:58.602693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.633 [2024-07-15 23:46:58.602719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.633 [2024-07-15 23:46:58.602753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.633 [2024-07-15 23:46:58.602779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.633 [2024-07-15 23:46:58.602808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.633 [2024-07-15 23:46:58.602833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.633 [2024-07-15 23:46:58.602862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.633 [2024-07-15 23:46:58.602886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.633 [2024-07-15 23:46:58.602916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.633 [2024-07-15 23:46:58.602940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.633 [2024-07-15 23:46:58.602978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.633 [2024-07-15 23:46:58.603004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.633 [2024-07-15 23:46:58.603033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.633 [2024-07-15 23:46:58.603058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.633 [2024-07-15 23:46:58.603086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.633 [2024-07-15 23:46:58.603110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.633 [2024-07-15 23:46:58.603138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.633 [2024-07-15 
23:46:58.603161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.633 [2024-07-15 23:46:58.603191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.633 [2024-07-15 23:46:58.603216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.633 [2024-07-15 23:46:58.603245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.633 [2024-07-15 23:46:58.603270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.633 [2024-07-15 23:46:58.603299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.633 [2024-07-15 23:46:58.603323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.633 [2024-07-15 23:46:58.603353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.633 [2024-07-15 23:46:58.603378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.633 [2024-07-15 23:46:58.603408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.633 [2024-07-15 23:46:58.603438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.633 [2024-07-15 23:46:58.603464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2919ec0 is same with the state(5) to be set 00:20:23.633 [2024-07-15 23:46:58.604990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.633 [2024-07-15 23:46:58.605023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.633 [2024-07-15 23:46:58.605058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.633 [2024-07-15 23:46:58.605085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.633 [2024-07-15 23:46:58.605113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.633 [2024-07-15 23:46:58.605139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.633 [2024-07-15 23:46:58.605167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.633 [2024-07-15 23:46:58.605192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.633 [2024-07-15 23:46:58.605220] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.633 [2024-07-15 23:46:58.605245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.633 [2024-07-15 23:46:58.605274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.633 [2024-07-15 23:46:58.605298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.633 [2024-07-15 23:46:58.605327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.633 [2024-07-15 23:46:58.605351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.633 [2024-07-15 23:46:58.605380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.633 [2024-07-15 23:46:58.605404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.633 [2024-07-15 23:46:58.605433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.633 [2024-07-15 23:46:58.605456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.633 [2024-07-15 23:46:58.605486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.633 [2024-07-15 23:46:58.605510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.633 [2024-07-15 23:46:58.605540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.633 [2024-07-15 23:46:58.605564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.633 [2024-07-15 23:46:58.605593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.633 [2024-07-15 23:46:58.605617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.633 [2024-07-15 23:46:58.605651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.634 [2024-07-15 23:46:58.605676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.634 [2024-07-15 23:46:58.605706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.634 [2024-07-15 23:46:58.605732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.634 [2024-07-15 23:46:58.605761] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.634 [2024-07-15 23:46:58.605785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.634 [2024-07-15 23:46:58.605815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.634 [2024-07-15 23:46:58.605840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.634 [2024-07-15 23:46:58.605868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.634 [2024-07-15 23:46:58.605892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.634 [2024-07-15 23:46:58.605923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.634 [2024-07-15 23:46:58.605948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.634 [2024-07-15 23:46:58.605988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.634 [2024-07-15 23:46:58.606012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.634 [2024-07-15 23:46:58.606041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.634 [2024-07-15 23:46:58.606065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.634 [2024-07-15 23:46:58.606095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.634 [2024-07-15 23:46:58.606118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.634 [2024-07-15 23:46:58.606148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.634 [2024-07-15 23:46:58.606173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.634 [2024-07-15 23:46:58.606202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.634 [2024-07-15 23:46:58.606228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.634 [2024-07-15 23:46:58.606258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.634 [2024-07-15 23:46:58.606282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.634 [2024-07-15 23:46:58.606311] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.634 [2024-07-15 23:46:58.606341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.634 [2024-07-15 23:46:58.606370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.634 [2024-07-15 23:46:58.606395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.634 [2024-07-15 23:46:58.606423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.634 [2024-07-15 23:46:58.606448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.634 [2024-07-15 23:46:58.606483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.634 [2024-07-15 23:46:58.606507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.634 [2024-07-15 23:46:58.606538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.634 [2024-07-15 23:46:58.606562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.634 [2024-07-15 23:46:58.606591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.634 [2024-07-15 23:46:58.606615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.634 [2024-07-15 23:46:58.606644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.634 [2024-07-15 23:46:58.606669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.634 [2024-07-15 23:46:58.606699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.634 [2024-07-15 23:46:58.606723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.634 [2024-07-15 23:46:58.606752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.634 [2024-07-15 23:46:58.606783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.634 [2024-07-15 23:46:58.606813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.634 [2024-07-15 23:46:58.606838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.634 [2024-07-15 23:46:58.606867] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.634 [2024-07-15 23:46:58.606891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.634 [2024-07-15 23:46:58.606919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.634 [2024-07-15 23:46:58.606943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.634 [2024-07-15 23:46:58.606997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.634 [2024-07-15 23:46:58.607024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.634 [2024-07-15 23:46:58.607058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.634 [2024-07-15 23:46:58.607084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.634 [2024-07-15 23:46:58.607112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.634 [2024-07-15 23:46:58.607138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.634 [2024-07-15 23:46:58.607164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.634 [2024-07-15 23:46:58.607190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.634 [2024-07-15 23:46:58.607218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.634 [2024-07-15 23:46:58.607244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.634 [2024-07-15 23:46:58.607271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.634 [2024-07-15 23:46:58.607297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.634 [2024-07-15 23:46:58.607324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.634 [2024-07-15 23:46:58.607350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.634 [2024-07-15 23:46:58.607377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.634 [2024-07-15 23:46:58.607403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.634 [2024-07-15 23:46:58.607430] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.634 [2024-07-15 23:46:58.607456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.634 [2024-07-15 23:46:58.607484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.634 [2024-07-15 23:46:58.607510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.634 [2024-07-15 23:46:58.607537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.634 [2024-07-15 23:46:58.607563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.634 [2024-07-15 23:46:58.607591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.634 [2024-07-15 23:46:58.607616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.634 [2024-07-15 23:46:58.607644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.634 [2024-07-15 23:46:58.607670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.634 [2024-07-15 23:46:58.607698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.634 [2024-07-15 23:46:58.607734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.634 [2024-07-15 23:46:58.607762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.634 [2024-07-15 23:46:58.607788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.634 [2024-07-15 23:46:58.607816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.634 [2024-07-15 23:46:58.607842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.634 [2024-07-15 23:46:58.607869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.634 [2024-07-15 23:46:58.607895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.635 [2024-07-15 23:46:58.607922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.635 [2024-07-15 23:46:58.607949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.635 [2024-07-15 23:46:58.607985] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.635 [2024-07-15 23:46:58.608012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.635 [2024-07-15 23:46:58.608038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.635 [2024-07-15 23:46:58.608064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.635 [2024-07-15 23:46:58.608091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.635 [2024-07-15 23:46:58.608116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.635 [2024-07-15 23:46:58.608143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.635 [2024-07-15 23:46:58.608169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.635 [2024-07-15 23:46:58.608197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.635 [2024-07-15 23:46:58.608223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.635 [2024-07-15 23:46:58.608251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.635 [2024-07-15 23:46:58.608277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.635 [2024-07-15 23:46:58.608303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.635 [2024-07-15 23:46:58.608330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.635 [2024-07-15 23:46:58.608357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.635 [2024-07-15 23:46:58.608384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.635 [2024-07-15 23:46:58.608417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.635 [2024-07-15 23:46:58.608442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.635 [2024-07-15 23:46:58.608469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.635 [2024-07-15 23:46:58.608495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.635 [2024-07-15 23:46:58.608521] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x291b390 is same with the state(5) to be set 00:20:23.635 [2024-07-15 23:46:58.610027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.635 [2024-07-15 23:46:58.610059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.635 [2024-07-15 23:46:58.610093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.635 [2024-07-15 23:46:58.610119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.635 [2024-07-15 23:46:58.610148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.635 [2024-07-15 23:46:58.610172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.635 [2024-07-15 23:46:58.610201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.635 [2024-07-15 23:46:58.610226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.635 [2024-07-15 23:46:58.610256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.635 [2024-07-15 23:46:58.610280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.635 [2024-07-15 23:46:58.610308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.635 [2024-07-15 23:46:58.610332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.635 [2024-07-15 23:46:58.610360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.635 [2024-07-15 23:46:58.610385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.635 [2024-07-15 23:46:58.610413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.635 [2024-07-15 23:46:58.610438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.635 [2024-07-15 23:46:58.610466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.635 [2024-07-15 23:46:58.610490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.635 [2024-07-15 23:46:58.610520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.635 [2024-07-15 23:46:58.610545] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.635 [2024-07-15 23:46:58.610579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.635 [2024-07-15 23:46:58.610605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.635 [2024-07-15 23:46:58.610632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.635 [2024-07-15 23:46:58.610657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.635 [2024-07-15 23:46:58.610686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.635 [2024-07-15 23:46:58.610710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.635 [2024-07-15 23:46:58.610739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.635 [2024-07-15 23:46:58.610763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.635 [2024-07-15 23:46:58.610793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.635 [2024-07-15 23:46:58.610818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.635 [2024-07-15 23:46:58.610848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.635 [2024-07-15 23:46:58.610872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.635 [2024-07-15 23:46:58.610902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.635 [2024-07-15 23:46:58.610927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.635 [2024-07-15 23:46:58.610965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.635 [2024-07-15 23:46:58.610992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.635 [2024-07-15 23:46:58.611021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.635 [2024-07-15 23:46:58.611045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.635 [2024-07-15 23:46:58.611075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.635 [2024-07-15 23:46:58.611100] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.635 [2024-07-15 23:46:58.611129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.635 [2024-07-15 23:46:58.611153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.635 [2024-07-15 23:46:58.611182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.635 [2024-07-15 23:46:58.611207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.635 [2024-07-15 23:46:58.611235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.635 [2024-07-15 23:46:58.611266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.635 [2024-07-15 23:46:58.611294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.635 [2024-07-15 23:46:58.611318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.635 [2024-07-15 23:46:58.611346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.635 [2024-07-15 23:46:58.611370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.635 [2024-07-15 23:46:58.611399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.635 [2024-07-15 23:46:58.611424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.635 [2024-07-15 23:46:58.611453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.635 [2024-07-15 23:46:58.611477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.635 [2024-07-15 23:46:58.611506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.635 [2024-07-15 23:46:58.611531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.635 [2024-07-15 23:46:58.611559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.635 [2024-07-15 23:46:58.611584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.635 [2024-07-15 23:46:58.611613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.635 [2024-07-15 23:46:58.611637] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.635 [2024-07-15 23:46:58.611666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.635 [2024-07-15 23:46:58.611690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.635 [2024-07-15 23:46:58.611720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.636 [2024-07-15 23:46:58.611744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.636 [2024-07-15 23:46:58.611774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.636 [2024-07-15 23:46:58.611798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.636 [2024-07-15 23:46:58.611827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.636 [2024-07-15 23:46:58.611851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.636 [2024-07-15 23:46:58.611880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.636 [2024-07-15 23:46:58.611904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.636 [2024-07-15 23:46:58.611939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.636 [2024-07-15 23:46:58.611975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.636 [2024-07-15 23:46:58.612005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.636 [2024-07-15 23:46:58.612030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.636 [2024-07-15 23:46:58.612059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.636 [2024-07-15 23:46:58.612083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.636 [2024-07-15 23:46:58.612111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.636 [2024-07-15 23:46:58.612136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.636 [2024-07-15 23:46:58.612164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.636 [2024-07-15 23:46:58.612189] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.636 [2024-07-15 23:46:58.612217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.636 [2024-07-15 23:46:58.612242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.636 [2024-07-15 23:46:58.612270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.636 [2024-07-15 23:46:58.612294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.636 [2024-07-15 23:46:58.612325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.636 [2024-07-15 23:46:58.612349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.636 [2024-07-15 23:46:58.612379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.636 [2024-07-15 23:46:58.612403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.636 [2024-07-15 23:46:58.612432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.636 [2024-07-15 23:46:58.612456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.636 [2024-07-15 23:46:58.612485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.636 [2024-07-15 23:46:58.612509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.636 [2024-07-15 23:46:58.612537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.636 [2024-07-15 23:46:58.612562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.636 [2024-07-15 23:46:58.612590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.636 [2024-07-15 23:46:58.612620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.636 [2024-07-15 23:46:58.612649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.636 [2024-07-15 23:46:58.612673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.636 [2024-07-15 23:46:58.612713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.636 [2024-07-15 23:46:58.612740] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.636 [2024-07-15 23:46:58.612767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.636 [2024-07-15 23:46:58.612792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.636 [2024-07-15 23:46:58.612820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.636 [2024-07-15 23:46:58.612845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.636 [2024-07-15 23:46:58.612873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.636 [2024-07-15 23:46:58.612898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.636 [2024-07-15 23:46:58.612927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.636 [2024-07-15 23:46:58.612951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.636 [2024-07-15 23:46:58.612990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.636 [2024-07-15 23:46:58.613015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.636 [2024-07-15 23:46:58.613043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.636 [2024-07-15 23:46:58.613068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.636 [2024-07-15 23:46:58.613096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.636 [2024-07-15 23:46:58.613120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.636 [2024-07-15 23:46:58.613149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.636 [2024-07-15 23:46:58.613173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.636 [2024-07-15 23:46:58.613202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.636 [2024-07-15 23:46:58.613227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.636 [2024-07-15 23:46:58.613255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.636 [2024-07-15 23:46:58.613279] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.636 [2024-07-15 23:46:58.613313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.636 [2024-07-15 23:46:58.613338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.636 [2024-07-15 23:46:58.613366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.636 [2024-07-15 23:46:58.613391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.636 [2024-07-15 23:46:58.613418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.636 [2024-07-15 23:46:58.613443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.636 [2024-07-15 23:46:58.613470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.636 [2024-07-15 23:46:58.613496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.636 [2024-07-15 23:46:58.613523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x291c860 is same with the state(5) to be set 00:20:23.636 [2024-07-15 23:46:58.615363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:20:23.637 [2024-07-15 23:46:58.615410] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:20:23.637 [2024-07-15 23:46:58.615445] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:20:23.637 [2024-07-15 23:46:58.615475] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:20:23.637 [2024-07-15 23:46:58.615505] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:20:23.637 [2024-07-15 23:46:58.615605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26c0830 (9): Bad file descriptor 00:20:23.637 [2024-07-15 23:46:58.615645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x288c240 (9): Bad file descriptor 00:20:23.637 [2024-07-15 23:46:58.615677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x275c600 (9): Bad file descriptor 00:20:23.637 [2024-07-15 23:46:58.615751] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:23.637 [2024-07-15 23:46:58.615804] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:23.637 [2024-07-15 23:46:58.615837] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:23.637 [2024-07-15 23:46:58.615872] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:23.637 [2024-07-15 23:46:58.616030] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:20:23.637 [2024-07-15 23:46:58.616314] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.637 [2024-07-15 23:46:58.616355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26ec450 with addr=10.0.0.2, port=4420 00:20:23.637 [2024-07-15 23:46:58.616383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26ec450 is same with the state(5) to be set 00:20:23.637 [2024-07-15 23:46:58.616503] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.637 [2024-07-15 23:46:58.616537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26e3280 with addr=10.0.0.2, port=4420 00:20:23.637 [2024-07-15 23:46:58.616562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26e3280 is same with the state(5) to be set 00:20:23.637 [2024-07-15 23:46:58.616694] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.637 [2024-07-15 23:46:58.616727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26e2c60 with addr=10.0.0.2, port=4420 00:20:23.637 [2024-07-15 23:46:58.616754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26e2c60 is same with the state(5) to be set 00:20:23.637 [2024-07-15 23:46:58.616860] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.637 [2024-07-15 23:46:58.616894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21c2610 with addr=10.0.0.2, port=4420 00:20:23.637 [2024-07-15 23:46:58.616918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c2610 is same with the state(5) to be set 00:20:23.637 [2024-07-15 23:46:58.617046] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.637 [2024-07-15 23:46:58.617081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2784bb0 with addr=10.0.0.2, port=4420 00:20:23.637 [2024-07-15 23:46:58.617107] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2784bb0 is same with the state(5) to be set 00:20:23.637 [2024-07-15 23:46:58.617131] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.637 [2024-07-15 23:46:58.617152] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:23.637 [2024-07-15 23:46:58.617178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:23.637 [2024-07-15 23:46:58.617210] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:23.637 [2024-07-15 23:46:58.617234] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:20:23.637 [2024-07-15 23:46:58.617256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
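Note on the repeated "connect() failed, errno = 111" lines above: 111 is ECONNREFUSED on Linux, i.e. the target side has already torn down its listener while the host qpairs are still retrying. A minimal standalone sketch (not SPDK code) that maps the raw errno value printed by posix_sock_create to its name and message:

#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* 111 is the raw value printed in the log; on Linux this is
     * ECONNREFUSED ("Connection refused"). */
    int err = 111;
    printf("errno %d: %s (ECONNREFUSED=%d)\n", err, strerror(err), ECONNREFUSED);
    return 0;
}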
00:20:23.637 [2024-07-15 23:46:58.617288] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:20:23.637 [2024-07-15 23:46:58.617311] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:20:23.637 [2024-07-15 23:46:58.617333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:20:23.637 [2024-07-15 23:46:58.618782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.637 [2024-07-15 23:46:58.618816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.637 [2024-07-15 23:46:58.618854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.637 [2024-07-15 23:46:58.618882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.637 [2024-07-15 23:46:58.618910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.637 [2024-07-15 23:46:58.618938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.637 [2024-07-15 23:46:58.618976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.637 [2024-07-15 23:46:58.619002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.637 [2024-07-15 23:46:58.619030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.637 [2024-07-15 23:46:58.619056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.637 [2024-07-15 23:46:58.619090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.637 [2024-07-15 23:46:58.619115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.637 [2024-07-15 23:46:58.619143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.637 [2024-07-15 23:46:58.619168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.637 [2024-07-15 23:46:58.619195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.637 [2024-07-15 23:46:58.619219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.637 [2024-07-15 23:46:58.619246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.637 [2024-07-15 23:46:58.619272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.637 [2024-07-15 23:46:58.619299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.637 [2024-07-15 23:46:58.619325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.637 [2024-07-15 23:46:58.619353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.637 [2024-07-15 23:46:58.619379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.637 [2024-07-15 23:46:58.619406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.637 [2024-07-15 23:46:58.619432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.637 [2024-07-15 23:46:58.619460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.637 [2024-07-15 23:46:58.619486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.637 [2024-07-15 23:46:58.619513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.637 [2024-07-15 23:46:58.619542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.637 [2024-07-15 23:46:58.619570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.637 [2024-07-15 23:46:58.619596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.637 [2024-07-15 23:46:58.619623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.637 [2024-07-15 23:46:58.619649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.637 [2024-07-15 23:46:58.619676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.637 [2024-07-15 23:46:58.619702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.637 [2024-07-15 23:46:58.619729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.637 [2024-07-15 23:46:58.619760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.637 [2024-07-15 23:46:58.619788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.637 [2024-07-15 23:46:58.619814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.637 [2024-07-15 23:46:58.619842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.637 [2024-07-15 23:46:58.619867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.637 [2024-07-15 23:46:58.619895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.637 [2024-07-15 23:46:58.619920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.637 [2024-07-15 23:46:58.619948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.637 [2024-07-15 23:46:58.619982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.637 [2024-07-15 23:46:58.620009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.637 [2024-07-15 23:46:58.620034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.637 [2024-07-15 23:46:58.620061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.637 [2024-07-15 23:46:58.620087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.637 [2024-07-15 23:46:58.620116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.637 [2024-07-15 23:46:58.620141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.637 [2024-07-15 23:46:58.620169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.638 [2024-07-15 23:46:58.620192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.638 [2024-07-15 23:46:58.620223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.638 [2024-07-15 23:46:58.620247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.638 [2024-07-15 23:46:58.620276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.638 [2024-07-15 23:46:58.620300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.638 [2024-07-15 23:46:58.620329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.638 [2024-07-15 23:46:58.620353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.638 [2024-07-15 23:46:58.620382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.638 [2024-07-15 23:46:58.620406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.638 [2024-07-15 23:46:58.620442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.638 [2024-07-15 23:46:58.620467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.638 [2024-07-15 23:46:58.620495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.638 [2024-07-15 23:46:58.620520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.638 [2024-07-15 23:46:58.620548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.638 [2024-07-15 23:46:58.620573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.638 [2024-07-15 23:46:58.620600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.638 [2024-07-15 23:46:58.620623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.638 [2024-07-15 23:46:58.620653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.638 [2024-07-15 23:46:58.620677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.638 [2024-07-15 23:46:58.620706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.638 [2024-07-15 23:46:58.620732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.638 [2024-07-15 23:46:58.620762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.638 [2024-07-15 23:46:58.620787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.638 [2024-07-15 23:46:58.620816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.638 [2024-07-15 23:46:58.620841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.638 [2024-07-15 23:46:58.620872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.638 [2024-07-15 23:46:58.620896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:20:23.638 [2024-07-15 23:46:58.620926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.638 [2024-07-15 23:46:58.620950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.638 [2024-07-15 23:46:58.620988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.638 [2024-07-15 23:46:58.621014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.638 [2024-07-15 23:46:58.621044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.638 [2024-07-15 23:46:58.621069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.638 [2024-07-15 23:46:58.621098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.638 [2024-07-15 23:46:58.621128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.638 [2024-07-15 23:46:58.621157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.638 [2024-07-15 23:46:58.621182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.638 [2024-07-15 23:46:58.621212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.638 [2024-07-15 23:46:58.621237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.638 [2024-07-15 23:46:58.621266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.638 [2024-07-15 23:46:58.621291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.638 [2024-07-15 23:46:58.621320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.638 [2024-07-15 23:46:58.621345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.638 [2024-07-15 23:46:58.621374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.638 [2024-07-15 23:46:58.621399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.638 [2024-07-15 23:46:58.621427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.638 [2024-07-15 23:46:58.621451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.638 
[2024-07-15 23:46:58.621480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.638 [2024-07-15 23:46:58.621504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.638 [2024-07-15 23:46:58.621532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.638 [2024-07-15 23:46:58.621555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.638 [2024-07-15 23:46:58.621583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.638 [2024-07-15 23:46:58.621607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.638 [2024-07-15 23:46:58.621636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.638 [2024-07-15 23:46:58.621660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.638 [2024-07-15 23:46:58.621688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.638 [2024-07-15 23:46:58.621712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.638 [2024-07-15 23:46:58.621740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.638 [2024-07-15 23:46:58.621764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.638 [2024-07-15 23:46:58.621798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.638 [2024-07-15 23:46:58.621823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.638 [2024-07-15 23:46:58.621852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.638 [2024-07-15 23:46:58.621875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.638 [2024-07-15 23:46:58.621903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.638 [2024-07-15 23:46:58.621928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.638 [2024-07-15 23:46:58.621964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.638 [2024-07-15 23:46:58.621990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.638 [2024-07-15 
23:46:58.622019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.638 [2024-07-15 23:46:58.622043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.638 [2024-07-15 23:46:58.622070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.638 [2024-07-15 23:46:58.622095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.638 [2024-07-15 23:46:58.622122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.638 [2024-07-15 23:46:58.622148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.638 [2024-07-15 23:46:58.622175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.638 [2024-07-15 23:46:58.622202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.638 [2024-07-15 23:46:58.622229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.638 [2024-07-15 23:46:58.622254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.638 [2024-07-15 23:46:58.622279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x291db10 is same with the state(5) to be set 00:20:23.638 [2024-07-15 23:46:58.631517] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:23.638 [2024-07-15 23:46:58.631585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:23.638 [2024-07-15 23:46:58.631608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
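Every aborted command above carries the status pair "(00/08)". In spdk_nvme_print_completion output the parenthesized pair is the NVMe status code type and status code in hex; 00/08 is the generic status "Command Aborted due to SQ Deletion", which is expected here since the submission queues are being deleted as part of the controller resets. A small standalone decoder for just this case (the constants mirror the NVMe spec values also found in SPDK's nvme_spec.h; this is an illustrative sketch, not the SPDK implementation):

#include <stdio.h>

/* Per the NVMe spec: SCT 0x0 = generic command status,
 * SC 0x08 = command aborted due to SQ deletion. */
#define SCT_GENERIC            0x00
#define SC_ABORTED_SQ_DELETION 0x08

int main(void)
{
    unsigned sct = 0x00, sc = 0x08; /* the "(00/08)" pair from the log */

    if (sct == SCT_GENERIC && sc == SC_ABORTED_SQ_DELETION)
        printf("(%02x/%02x): ABORTED - SQ DELETION\n", sct, sc);
    return 0;
}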
00:20:23.638 task offset: 32384 on job bdev=Nvme3n1 fails
00:20:23.638 
00:20:23.638 Latency(us)
00:20:23.638 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:23.638 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:23.638 Job: Nvme1n1 ended in about 0.88 seconds with error
00:20:23.638 Verification LBA range: start 0x0 length 0x400
00:20:23.638 Nvme1n1 : 0.88 145.20 9.08 72.60 0.00 290473.66 20000.62 264085.81
00:20:23.638 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:23.638 Job: Nvme2n1 ended in about 0.89 seconds with error
00:20:23.639 Verification LBA range: start 0x0 length 0x400
00:20:23.639 Nvme2n1 : 0.89 143.05 8.94 71.53 0.00 288855.99 28350.39 268746.15
00:20:23.639 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:23.639 Job: Nvme3n1 ended in about 0.87 seconds with error
00:20:23.639 Verification LBA range: start 0x0 length 0x400
00:20:23.639 Nvme3n1 : 0.87 227.66 14.23 73.96 0.00 200687.75 7427.41 250104.79
00:20:23.639 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:23.639 Job: Nvme4n1 ended in about 0.91 seconds with error
00:20:23.639 Verification LBA range: start 0x0 length 0x400
00:20:23.639 Nvme4n1 : 0.91 217.14 13.57 70.54 0.00 206489.64 27379.48 242337.56
00:20:23.639 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:23.639 Job: Nvme5n1 ended in about 0.87 seconds with error
00:20:23.639 Verification LBA range: start 0x0 length 0x400
00:20:23.639 Nvme5n1 : 0.87 219.89 13.74 24.05 0.00 236652.99 17282.09 245444.46
00:20:23.639 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:23.639 Job: Nvme6n1 ended in about 0.91 seconds with error
00:20:23.639 Verification LBA range: start 0x0 length 0x400
00:20:23.639 Nvme6n1 : 0.91 145.79 9.11 70.15 0.00 263455.21 18544.26 239230.67
00:20:23.639 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:23.639 Job: Nvme7n1 ended in about 0.92 seconds with error
00:20:23.639 Verification LBA range: start 0x0 length 0x400
00:20:23.639 Nvme7n1 : 0.92 144.99 9.06 69.77 0.00 259336.76 37282.70 246997.90
00:20:23.639 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:23.639 Job: Nvme8n1 ended in about 0.92 seconds with error
00:20:23.639 Verification LBA range: start 0x0 length 0x400
00:20:23.639 Nvme8n1 : 0.92 144.20 9.01 69.39 0.00 255205.49 17767.54 250104.79
00:20:23.639 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:23.639 Job: Nvme9n1 ended in about 0.93 seconds with error
00:20:23.639 Verification LBA range: start 0x0 length 0x400
00:20:23.639 Nvme9n1 : 0.93 141.77 8.86 68.74 0.00 253521.29 25243.50 282727.16
00:20:23.639 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:23.639 Job: Nvme10n1 ended in about 0.90 seconds with error
00:20:23.639 Verification LBA range: start 0x0 length 0x400
00:20:23.639 Nvme10n1 : 0.90 142.24 8.89 71.12 0.00 242501.15 24660.95 284280.60
00:20:23.639 ===================================================================================================================
00:20:23.639 Total : 1671.93 104.50 661.85 0.00 246628.97 7427.41 284280.60
00:20:23.639 [2024-07-15 23:46:58.659498] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:23.639 [2024-07-15 23:46:58.659574] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting 
controller 00:20:23.639 [2024-07-15 23:46:58.659908] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.639 [2024-07-15 23:46:58.659948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2784990 with addr=10.0.0.2, port=4420 00:20:23.639 [2024-07-15 23:46:58.659989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2784990 is same with the state(5) to be set 00:20:23.639 [2024-07-15 23:46:58.660029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26ec450 (9): Bad file descriptor 00:20:23.639 [2024-07-15 23:46:58.660066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26e3280 (9): Bad file descriptor 00:20:23.639 [2024-07-15 23:46:58.660098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26e2c60 (9): Bad file descriptor 00:20:23.639 [2024-07-15 23:46:58.660132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c2610 (9): Bad file descriptor 00:20:23.639 [2024-07-15 23:46:58.660164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2784bb0 (9): Bad file descriptor 00:20:23.639 [2024-07-15 23:46:58.660275] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:23.639 [2024-07-15 23:46:58.660311] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:23.639 [2024-07-15 23:46:58.660344] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:23.639 [2024-07-15 23:46:58.660373] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:23.639 [2024-07-15 23:46:58.660406] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:23.639 [2024-07-15 23:46:58.660440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2784990 (9): Bad file descriptor 00:20:23.639 [2024-07-15 23:46:58.660793] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:23.639 [2024-07-15 23:46:58.660829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x288a350 with addr=10.0.0.2, port=4420 00:20:23.639 [2024-07-15 23:46:58.660857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x288a350 is same with the state(5) to be set 00:20:23.639 [2024-07-15 23:46:58.660885] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:20:23.639 [2024-07-15 23:46:58.660918] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:20:23.639 [2024-07-15 23:46:58.660944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:20:23.639 [2024-07-15 23:46:58.660983] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:20:23.639 [2024-07-15 23:46:58.661007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:20:23.639 [2024-07-15 23:46:58.661029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
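Each of the remaining controllers walks the same sequence just traced for cnode3 and cnode5: nvme_ctrlr_disconnect begins a reset, the TCP reconnect gets connect() errno 111 (connection refused, since the target side is already gone), nvme_ctrlr_process_init and spdk_nvme_ctrlr_reconnect_poll_async then declare the controller in error state, and the bdev layer reports "Resetting controller failed." for that cnodeN, which is exactly the condition this shutdown test sets out to provoke.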
00:20:23.639 [2024-07-15 23:46:58.661059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:20:23.639 [2024-07-15 23:46:58.661083] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:20:23.639 [2024-07-15 23:46:58.661105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:20:23.639 [2024-07-15 23:46:58.661133] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:20:23.639 [2024-07-15 23:46:58.661156] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:20:23.639 [2024-07-15 23:46:58.661179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:20:23.639 [2024-07-15 23:46:58.661205] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:20:23.639 [2024-07-15 23:46:58.661234] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:20:23.639 [2024-07-15 23:46:58.661255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:20:23.639 [2024-07-15 23:46:58.661305] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:23.639 [2024-07-15 23:46:58.661339] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:23.639 [2024-07-15 23:46:58.661368] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:23.639 [2024-07-15 23:46:58.661398] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:23.639 [2024-07-15 23:46:58.661427] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:23.639 [2024-07-15 23:46:58.661458] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:23.639 [2024-07-15 23:46:58.661493] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:23.639 [2024-07-15 23:46:58.661524] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:23.639 [2024-07-15 23:46:58.661554] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:23.639 [2024-07-15 23:46:58.662000] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:20:23.639 [2024-07-15 23:46:58.662036] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:20:23.639 [2024-07-15 23:46:58.662064] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:23.639 [2024-07-15 23:46:58.662112] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:23.639 [2024-07-15 23:46:58.662140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:23.639 [2024-07-15 23:46:58.662161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:23.639 [2024-07-15 23:46:58.662182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:23.639 [2024-07-15 23:46:58.662237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x288a350 (9): Bad file descriptor
00:20:23.639 [2024-07-15 23:46:58.662267] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:20:23.639 [2024-07-15 23:46:58.662290] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:20:23.639 [2024-07-15 23:46:58.662312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:20:23.639 [2024-07-15 23:46:58.662364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:23.639 [2024-07-15 23:46:58.662411] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:23.639 [2024-07-15 23:46:58.662535] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.639 [2024-07-15 23:46:58.662571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x275c600 with addr=10.0.0.2, port=4420
00:20:23.639 [2024-07-15 23:46:58.662596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x275c600 is same with the state(5) to be set
00:20:23.639 [2024-07-15 23:46:58.662711] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.639 [2024-07-15 23:46:58.662745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x288c240 with addr=10.0.0.2, port=4420
00:20:23.639 [2024-07-15 23:46:58.662772] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x288c240 is same with the state(5) to be set
00:20:23.639 [2024-07-15 23:46:58.662887] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.639 [2024-07-15 23:46:58.662921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26c0830 with addr=10.0.0.2, port=4420
00:20:23.639 [2024-07-15 23:46:58.662946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26c0830 is same with the state(5) to be set
00:20:23.639 [2024-07-15 23:46:58.663006] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:20:23.639 [2024-07-15 23:46:58.663029] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:20:23.639 [2024-07-15 23:46:58.663052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:20:23.639 [2024-07-15 23:46:58.663118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:23.639 [2024-07-15 23:46:58.663152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x275c600 (9): Bad file descriptor
00:20:23.639 [2024-07-15 23:46:58.663184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x288c240 (9): Bad file descriptor
00:20:23.639 [2024-07-15 23:46:58.663224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26c0830 (9): Bad file descriptor
00:20:23.639 [2024-07-15 23:46:58.663283] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:20:23.640 [2024-07-15 23:46:58.663310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:20:23.640 [2024-07-15 23:46:58.663334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:20:23.640 [2024-07-15 23:46:58.663360] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:20:23.640 [2024-07-15 23:46:58.663384] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:20:23.640 [2024-07-15 23:46:58.663406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:20:23.640 [2024-07-15 23:46:58.663434] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:23.640 [2024-07-15 23:46:58.663457] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:23.640 [2024-07-15 23:46:58.663479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:23.640 [2024-07-15 23:46:58.663540] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:23.640 [2024-07-15 23:46:58.663567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:23.640 [2024-07-15 23:46:58.663587] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
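A quick cross-check of the latency table above: the MiB/s column is just IOPS times the 64 KiB IO size, e.g. for Nvme1n1, 145.20 IOPS x 64 KiB = 9292.8 KiB/s, which is about 9.08 MiB/s, and the Total row is a straight column sum (the ten IOPS figures add up to 1671.93 and the ten MiB/s figures to 104.50). The nonzero Fail/s column is the per-second rate of aborted I/Os like the ones printed earlier; with every controller torn down mid-run, every job ends "with error", which is the outcome this shutdown test is driving toward.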
00:20:24.207 23:46:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:20:24.208 23:46:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:20:25.146 23:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3828114 00:20:25.146 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3828114) - No such process 00:20:25.146 23:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:20:25.146 23:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:20:25.146 23:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:25.146 23:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:25.146 23:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:25.146 23:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:25.146 23:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:25.146 23:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:20:25.146 23:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:25.146 23:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:20:25.146 23:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:25.146 23:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:25.146 rmmod nvme_tcp 00:20:25.146 rmmod nvme_fabrics 00:20:25.146 rmmod nvme_keyring 00:20:25.146 23:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:25.146 23:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:20:25.146 23:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:20:25.146 23:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:20:25.146 23:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:25.146 23:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:25.146 23:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:25.146 23:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:25.146 23:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:25.146 23:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.146 23:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:25.146 23:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.058 23:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:27.058 00:20:27.058 real 0m7.151s 00:20:27.058 user 0m16.641s 00:20:27.058 sys 0m1.445s 00:20:27.058 
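For reference, after confirming the target pid is already gone (the kill -9 above returns "No such process"), the stoptarget/nvmftestfini path traced here reduces to roughly the following shell sequence. Everything except the namespace removal mirrors commands visible in this log; the ip netns delete line is an assumption about what _remove_spdk_ns does, since its body is elided above:

  rm -f ./local-job0-0-verify.state
  rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
  rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
  sync
  modprobe -v -r nvme-tcp                        # produces the "rmmod nvme_tcp" lines above
  modprobe -v -r nvme-fabrics                    # pulls out nvme_fabrics and nvme_keyring
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null    # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1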
23:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:27.058 23:47:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:27.058 ************************************ 00:20:27.058 END TEST nvmf_shutdown_tc3 00:20:27.058 ************************************ 00:20:27.316 23:47:02 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:20:27.317 23:47:02 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:20:27.317 00:20:27.317 real 0m27.213s 00:20:27.317 user 1m15.268s 00:20:27.317 sys 0m6.401s 00:20:27.317 23:47:02 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:27.317 23:47:02 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:27.317 ************************************ 00:20:27.317 END TEST nvmf_shutdown 00:20:27.317 ************************************ 00:20:27.317 23:47:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:27.317 23:47:02 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:20:27.317 23:47:02 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:27.317 23:47:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:27.317 23:47:02 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:20:27.317 23:47:02 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:27.317 23:47:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:27.317 23:47:02 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:20:27.317 23:47:02 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:27.317 23:47:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:27.317 23:47:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:27.317 23:47:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:27.317 ************************************ 00:20:27.317 START TEST nvmf_multicontroller 00:20:27.317 ************************************ 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:27.317 * Looking for test storage... 
00:20:27.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:27.317 23:47:02 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:20:27.317 23:47:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:29.878 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:29.878 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:29.879 23:47:04 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:29.879 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:29.879 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:29.879 Found net devices under 0000:09:00.0: cvl_0_0 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:29.879 Found net devices under 0000:09:00.1: cvl_0_1 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:29.879 23:47:04 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:29.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:29.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:20:29.879 00:20:29.879 --- 10.0.0.2 ping statistics --- 00:20:29.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.879 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:20:29.879 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:29.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:29.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:20:29.879 00:20:29.879 --- 10.0.0.1 ping statistics --- 00:20:29.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.880 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:20:29.880 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:29.880 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:20:29.880 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:29.880 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:29.880 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:29.880 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:29.880 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:29.880 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:29.880 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:29.880 23:47:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:29.880 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:29.880 23:47:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:29.880 23:47:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:29.880 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=3830623 00:20:29.880 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:29.880 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 3830623 00:20:29.880 23:47:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3830623 ']' 00:20:29.880 23:47:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.880 23:47:04 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:20:29.880 23:47:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.880 23:47:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:29.880 23:47:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:29.880 [2024-07-15 23:47:04.688110] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:20:29.880 [2024-07-15 23:47:04.688211] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:29.880 EAL: No free 2048 kB hugepages reported on node 1 00:20:29.880 [2024-07-15 23:47:04.752893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:29.880 [2024-07-15 23:47:04.862420] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:29.880 [2024-07-15 23:47:04.862498] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:29.880 [2024-07-15 23:47:04.862512] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:29.880 [2024-07-15 23:47:04.862523] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:29.880 [2024-07-15 23:47:04.862546] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
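Everything from prepare_net_devs down to the ping pair above builds a self-contained two-port topology out of the two E810 ports found at 0000:09:00.0/0000:09:00.1: cvl_0_0 is moved into a private network namespace to play the target (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1). Condensed into a sketch, using exactly the names, addresses and order from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # 0.246 ms: the initiator reaches the namespaced target

nvmf_tgt is then launched inside that namespace with -m 0xE; 0xE is binary 1110, i.e. cores 1, 2 and 3, which matches both the "Total cores available: 3" notice and the three reactor startup messages that follow.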
00:20:29.880 [2024-07-15 23:47:04.862643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:29.880 [2024-07-15 23:47:04.862707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:29.880 [2024-07-15 23:47:04.862710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:29.880 23:47:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:29.880 23:47:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:20:29.880 23:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:29.880 23:47:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:29.880 23:47:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:30.139 23:47:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:30.139 23:47:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:30.139 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.139 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:30.139 [2024-07-15 23:47:05.008853] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:30.139 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.139 23:47:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:30.139 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.139 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:30.139 Malloc0 00:20:30.139 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.139 23:47:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:30.139 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.139 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:30.139 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.139 23:47:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:30.139 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.139 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:30.139 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.139 23:47:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:30.139 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.139 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:30.139 [2024-07-15 23:47:05.075024] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:30.139 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.139 
23:47:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:30.139 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.139 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:30.139 [2024-07-15 23:47:05.082906] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:30.139 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.139 23:47:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:30.139 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.139 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:30.139 Malloc1 00:20:30.139 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.139 23:47:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:30.139 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.139 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:30.139 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.139 23:47:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:30.139 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.139 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:30.139 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.139 23:47:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:30.139 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.139 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:30.139 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.139 23:47:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:30.139 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.139 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:30.139 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.140 23:47:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3830655 00:20:30.140 23:47:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:30.140 23:47:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:30.140 23:47:05 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 3830655 /var/tmp/bdevperf.sock 00:20:30.140 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3830655 ']' 00:20:30.140 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:30.140 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:30.140 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:30.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:30.140 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:30.140 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:30.398 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:30.398 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:20:30.398 23:47:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:30.398 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.398 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:30.656 NVMe0n1 00:20:30.656 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.656 23:47:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:30.656 23:47:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:30.656 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.656 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:30.656 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.656 1 00:20:30.656 23:47:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:30.656 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:30.656 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:30.656 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:30.656 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:30.656 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:30.656 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:30.656 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:30.656 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.656 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:30.656 request: 00:20:30.656 { 00:20:30.656 "name": "NVMe0", 00:20:30.656 "trtype": "tcp", 00:20:30.656 "traddr": "10.0.0.2", 00:20:30.656 "adrfam": "ipv4", 00:20:30.656 "trsvcid": "4420", 00:20:30.656 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:30.656 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:30.656 "hostaddr": "10.0.0.2", 00:20:30.656 "hostsvcid": "60000", 00:20:30.656 "prchk_reftag": false, 00:20:30.656 "prchk_guard": false, 00:20:30.656 "hdgst": false, 00:20:30.656 "ddgst": false, 00:20:30.656 "method": "bdev_nvme_attach_controller", 00:20:30.656 "req_id": 1 00:20:30.656 } 00:20:30.656 Got JSON-RPC error response 00:20:30.656 response: 00:20:30.656 { 00:20:30.656 "code": -114, 00:20:30.656 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:30.656 } 00:20:30.656 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:30.656 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:30.656 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:30.656 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:30.656 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:30.656 23:47:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:30.656 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:30.656 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:30.656 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:30.656 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:30.656 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:30.656 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:30.656 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:30.656 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.656 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:30.656 request: 00:20:30.656 { 00:20:30.656 "name": "NVMe0", 00:20:30.656 "trtype": "tcp", 00:20:30.656 "traddr": "10.0.0.2", 00:20:30.656 "adrfam": "ipv4", 00:20:30.656 "trsvcid": "4420", 00:20:30.657 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:30.657 "hostaddr": "10.0.0.2", 00:20:30.657 "hostsvcid": "60000", 00:20:30.657 "prchk_reftag": false, 00:20:30.657 "prchk_guard": false, 00:20:30.657 
"hdgst": false, 00:20:30.657 "ddgst": false, 00:20:30.657 "method": "bdev_nvme_attach_controller", 00:20:30.657 "req_id": 1 00:20:30.657 } 00:20:30.657 Got JSON-RPC error response 00:20:30.657 response: 00:20:30.657 { 00:20:30.657 "code": -114, 00:20:30.657 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:30.657 } 00:20:30.657 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:30.657 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:30.657 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:30.657 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:30.657 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:30.657 23:47:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:30.657 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:30.657 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:30.657 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:30.657 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:30.657 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:30.657 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:30.657 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:30.657 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.657 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:30.657 request: 00:20:30.657 { 00:20:30.657 "name": "NVMe0", 00:20:30.657 "trtype": "tcp", 00:20:30.657 "traddr": "10.0.0.2", 00:20:30.657 "adrfam": "ipv4", 00:20:30.657 "trsvcid": "4420", 00:20:30.657 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:30.657 "hostaddr": "10.0.0.2", 00:20:30.657 "hostsvcid": "60000", 00:20:30.657 "prchk_reftag": false, 00:20:30.657 "prchk_guard": false, 00:20:30.657 "hdgst": false, 00:20:30.657 "ddgst": false, 00:20:30.657 "multipath": "disable", 00:20:30.657 "method": "bdev_nvme_attach_controller", 00:20:30.657 "req_id": 1 00:20:30.657 } 00:20:30.657 Got JSON-RPC error response 00:20:30.657 response: 00:20:30.657 { 00:20:30.657 "code": -114, 00:20:30.657 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:20:30.657 } 00:20:30.657 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:30.657 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:30.657 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:30.657 23:47:05 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:30.657 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:30.657 23:47:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:30.657 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:30.657 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:30.657 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:30.657 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:30.657 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:30.657 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:30.657 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:30.657 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.657 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:30.657 request: 00:20:30.657 { 00:20:30.657 "name": "NVMe0", 00:20:30.657 "trtype": "tcp", 00:20:30.657 "traddr": "10.0.0.2", 00:20:30.657 "adrfam": "ipv4", 00:20:30.657 "trsvcid": "4420", 00:20:30.657 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:30.657 "hostaddr": "10.0.0.2", 00:20:30.657 "hostsvcid": "60000", 00:20:30.657 "prchk_reftag": false, 00:20:30.657 "prchk_guard": false, 00:20:30.657 "hdgst": false, 00:20:30.657 "ddgst": false, 00:20:30.657 "multipath": "failover", 00:20:30.657 "method": "bdev_nvme_attach_controller", 00:20:30.657 "req_id": 1 00:20:30.657 } 00:20:30.657 Got JSON-RPC error response 00:20:30.657 response: 00:20:30.657 { 00:20:30.657 "code": -114, 00:20:30.657 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:30.657 } 00:20:30.657 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:30.657 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:30.657 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:30.657 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:30.657 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:30.657 23:47:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:30.657 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.657 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:30.657 00:20:30.915 23:47:05 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.915 23:47:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:30.915 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.915 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:30.915 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.915 23:47:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:30.915 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.915 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:30.915 00:20:30.915 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.915 23:47:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:30.915 23:47:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:30.915 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.915 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:30.915 23:47:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.915 23:47:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:30.915 23:47:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:32.286 0 00:20:32.286 23:47:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:32.286 23:47:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.286 23:47:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:32.286 23:47:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.286 23:47:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 3830655 00:20:32.286 23:47:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3830655 ']' 00:20:32.286 23:47:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3830655 00:20:32.286 23:47:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:20:32.286 23:47:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:32.286 23:47:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3830655 00:20:32.286 23:47:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:32.286 23:47:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:32.286 23:47:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3830655' 00:20:32.286 killing process with pid 3830655 00:20:32.286 23:47:07 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3830655 00:20:32.286 23:47:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3830655 00:20:32.544 23:47:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:32.544 23:47:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.544 23:47:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:32.544 23:47:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.544 23:47:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:32.544 23:47:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.544 23:47:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:32.544 23:47:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.544 23:47:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:32.544 23:47:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:32.544 23:47:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:20:32.544 23:47:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:20:32.544 23:47:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:20:32.544 23:47:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:20:32.544 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:32.544 [2024-07-15 23:47:05.184090] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:20:32.544 [2024-07-15 23:47:05.184193] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3830655 ] 00:20:32.544 EAL: No free 2048 kB hugepages reported on node 1 00:20:32.544 [2024-07-15 23:47:05.245145] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.544 [2024-07-15 23:47:05.355250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:32.544 [2024-07-15 23:47:05.973705] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name d5faef04-2cb2-431f-8fb3-9b2b15519454 already exists 00:20:32.544 [2024-07-15 23:47:05.973746] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:d5faef04-2cb2-431f-8fb3-9b2b15519454 alias for bdev NVMe1n1 00:20:32.544 [2024-07-15 23:47:05.973776] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:32.544 Running I/O for 1 seconds... 
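Note on the multicontroller sequence traced above: the NOT-wrapped attach attempts each had to be rejected with -114 (reusing the name NVMe0 for a different subsystem, multipath "disable", and multipath "failover" against the already-claimed path), while the calls that follow had to succeed — adding a second path on listener 4421 to NVMe0, detaching it by address, attaching an independent NVMe1, and confirming bdev_nvme_get_controllers | grep -c NVMe reports 2. With that in place the harness drives a timed write workload through bdevperf's RPC socket, producing the latency table below. A condensed sketch of the passing sequence, with every path, NQN, and address taken verbatim from the trace (rpc_cmd in the trace is the harness's wrapper around scripts/rpc.py, shown here invoked directly):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  # multicontroller.sh@79: a second listener (4421) for the same subsystem is accepted
  $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # @83: and can be detached again by address
  $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # @87: an independent second controller with a pinned host address/port
  $rpc -s $sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  # @95: run the configured bdevperf I/O job; results print as the table below
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s $sock perform_tests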
00:20:32.544 00:20:32.544 Latency(us) 00:20:32.544 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.544 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:32.544 NVMe0n1 : 1.01 18812.76 73.49 0.00 0.00 6793.39 4126.34 13107.20 00:20:32.544 =================================================================================================================== 00:20:32.544 Total : 18812.76 73.49 0.00 0.00 6793.39 4126.34 13107.20 00:20:32.544 Received shutdown signal, test time was about 1.000000 seconds 00:20:32.544 00:20:32.544 Latency(us) 00:20:32.544 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.544 =================================================================================================================== 00:20:32.544 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:32.544 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:32.544 23:47:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:32.544 23:47:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:20:32.544 23:47:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:32.544 23:47:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:32.544 23:47:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:20:32.544 23:47:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:32.544 23:47:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:20:32.544 23:47:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:32.544 23:47:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:32.544 rmmod nvme_tcp 00:20:32.544 rmmod nvme_fabrics 00:20:32.544 rmmod nvme_keyring 00:20:32.544 23:47:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:32.544 23:47:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:20:32.544 23:47:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:20:32.544 23:47:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 3830623 ']' 00:20:32.544 23:47:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 3830623 00:20:32.544 23:47:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3830623 ']' 00:20:32.544 23:47:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3830623 00:20:32.544 23:47:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:20:32.544 23:47:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:32.544 23:47:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3830623 00:20:32.545 23:47:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:32.545 23:47:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:32.545 23:47:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3830623' 00:20:32.545 killing process with pid 3830623 00:20:32.545 23:47:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3830623 00:20:32.545 23:47:07 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3830623 00:20:32.802 23:47:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:32.802 23:47:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:32.802 23:47:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:32.802 23:47:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:32.802 23:47:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:32.802 23:47:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.802 23:47:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:32.802 23:47:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.337 23:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:35.337 00:20:35.337 real 0m7.593s 00:20:35.337 user 0m11.861s 00:20:35.337 sys 0m2.335s 00:20:35.337 23:47:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:35.337 23:47:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:35.337 ************************************ 00:20:35.337 END TEST nvmf_multicontroller 00:20:35.337 ************************************ 00:20:35.337 23:47:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:35.337 23:47:09 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:35.337 23:47:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:35.337 23:47:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:35.337 23:47:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:35.337 ************************************ 00:20:35.337 START TEST nvmf_aer 00:20:35.337 ************************************ 00:20:35.337 23:47:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:35.337 * Looking for test storage... 
00:20:35.337 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:35.337 23:47:09 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:35.337 23:47:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:20:35.337 23:47:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:35.337 23:47:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:35.337 23:47:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:35.337 23:47:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:35.337 23:47:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:35.337 23:47:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:35.337 23:47:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:35.337 23:47:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:35.337 23:47:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:35.337 23:47:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:35.337 23:47:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:35.337 23:47:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:35.337 23:47:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:35.337 23:47:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:35.337 23:47:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:35.337 23:47:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:35.337 23:47:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:35.337 23:47:09 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:35.338 23:47:09 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:35.338 23:47:09 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:35.338 23:47:09 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.338 23:47:09 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.338 23:47:09 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.338 23:47:09 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:20:35.338 23:47:09 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.338 23:47:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:20:35.338 23:47:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:35.338 23:47:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:35.338 23:47:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:35.338 23:47:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:35.338 23:47:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:35.338 23:47:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:35.338 23:47:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:35.338 23:47:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:35.338 23:47:09 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:20:35.338 23:47:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:35.338 23:47:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:35.338 23:47:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:35.338 23:47:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:35.338 23:47:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:35.338 23:47:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.338 23:47:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:35.338 23:47:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.338 23:47:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:35.338 23:47:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:35.338 23:47:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:20:35.338 23:47:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:37.238 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:37.238 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:20:37.238 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:37.238 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:20:37.238 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:37.238 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:37.238 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:37.238 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:20:37.238 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:37.238 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:20:37.238 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:20:37.238 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:20:37.238 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:20:37.238 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:20:37.238 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:20:37.238 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:37.238 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:37.238 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:37.238 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:37.238 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:37.238 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:37.238 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:37.238 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:37.238 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:37.239 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 
0x159b)' 00:20:37.239 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:37.239 Found net devices under 0000:09:00.0: cvl_0_0 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:37.239 Found net devices under 0000:09:00.1: cvl_0_1 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:37.239 
23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:37.239 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:37.239 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:20:37.239 00:20:37.239 --- 10.0.0.2 ping statistics --- 00:20:37.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.239 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:37.239 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:37.239 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:20:37.239 00:20:37.239 --- 10.0.0.1 ping statistics --- 00:20:37.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.239 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3832873 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3832873 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 3832873 ']' 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:37.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:37.239 23:47:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:37.239 [2024-07-15 23:47:12.300515] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:20:37.239 [2024-07-15 23:47:12.300594] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:37.239 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.498 [2024-07-15 23:47:12.365845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:37.498 [2024-07-15 23:47:12.480685] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:37.498 [2024-07-15 23:47:12.480757] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:37.498 [2024-07-15 23:47:12.480771] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:37.498 [2024-07-15 23:47:12.480797] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:37.498 [2024-07-15 23:47:12.480807] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:37.498 [2024-07-15 23:47:12.480898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:37.498 [2024-07-15 23:47:12.480978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:37.498 [2024-07-15 23:47:12.480979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.498 [2024-07-15 23:47:12.480924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:37.498 23:47:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:37.498 23:47:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:20:37.498 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:37.498 23:47:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:37.498 23:47:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:37.498 23:47:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:37.498 23:47:12 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:37.498 23:47:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.498 23:47:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:37.756 [2024-07-15 23:47:12.624643] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:37.756 23:47:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.756 23:47:12 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:37.756 23:47:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.756 23:47:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:37.756 Malloc0 00:20:37.756 23:47:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.756 23:47:12 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:37.756 23:47:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.756 23:47:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:37.756 23:47:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.756 23:47:12 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:37.756 23:47:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.756 23:47:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:37.756 23:47:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.757 23:47:12 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:37.757 23:47:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.757 23:47:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:37.757 [2024-07-15 23:47:12.675302] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:20:37.757 23:47:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.757 23:47:12 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:37.757 23:47:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.757 23:47:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:37.757 [ 00:20:37.757 { 00:20:37.757 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:37.757 "subtype": "Discovery", 00:20:37.757 "listen_addresses": [], 00:20:37.757 "allow_any_host": true, 00:20:37.757 "hosts": [] 00:20:37.757 }, 00:20:37.757 { 00:20:37.757 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.757 "subtype": "NVMe", 00:20:37.757 "listen_addresses": [ 00:20:37.757 { 00:20:37.757 "trtype": "TCP", 00:20:37.757 "adrfam": "IPv4", 00:20:37.757 "traddr": "10.0.0.2", 00:20:37.757 "trsvcid": "4420" 00:20:37.757 } 00:20:37.757 ], 00:20:37.757 "allow_any_host": true, 00:20:37.757 "hosts": [], 00:20:37.757 "serial_number": "SPDK00000000000001", 00:20:37.757 "model_number": "SPDK bdev Controller", 00:20:37.757 "max_namespaces": 2, 00:20:37.757 "min_cntlid": 1, 00:20:37.757 "max_cntlid": 65519, 00:20:37.757 "namespaces": [ 00:20:37.757 { 00:20:37.757 "nsid": 1, 00:20:37.757 "bdev_name": "Malloc0", 00:20:37.757 "name": "Malloc0", 00:20:37.757 "nguid": "93262D98FD214848B0553574DBE411E7", 00:20:37.757 "uuid": "93262d98-fd21-4848-b055-3574dbe411e7" 00:20:37.757 } 00:20:37.757 ] 00:20:37.757 } 00:20:37.757 ] 00:20:37.757 23:47:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.757 23:47:12 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:37.757 23:47:12 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:37.757 23:47:12 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=3833014 00:20:37.757 23:47:12 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:37.757 23:47:12 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:37.757 23:47:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:20:37.757 23:47:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:37.757 23:47:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:20:37.757 23:47:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:20:37.757 23:47:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:37.757 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.757 23:47:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:37.757 23:47:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:20:37.757 23:47:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:20:37.757 23:47:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:38.015 23:47:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:38.015 23:47:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:20:38.015 23:47:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:20:38.015 23:47:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:38.015 23:47:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:38.015 23:47:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:38.015 23:47:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:20:38.015 23:47:13 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:38.015 23:47:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.015 23:47:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:38.015 Malloc1 00:20:38.016 23:47:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.016 23:47:13 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:38.016 23:47:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.016 23:47:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:38.016 23:47:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.016 23:47:13 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:38.016 23:47:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.016 23:47:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:38.016 [ 00:20:38.016 { 00:20:38.016 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:38.016 "subtype": "Discovery", 00:20:38.016 "listen_addresses": [], 00:20:38.016 "allow_any_host": true, 00:20:38.016 "hosts": [] 00:20:38.016 }, 00:20:38.016 { 00:20:38.016 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.016 "subtype": "NVMe", 00:20:38.016 "listen_addresses": [ 00:20:38.016 { 00:20:38.016 "trtype": "TCP", 00:20:38.016 "adrfam": "IPv4", 00:20:38.016 "traddr": "10.0.0.2", 00:20:38.016 "trsvcid": "4420" 00:20:38.016 } 00:20:38.016 ], 00:20:38.016 "allow_any_host": true, 00:20:38.016 "hosts": [], 00:20:38.016 "serial_number": "SPDK00000000000001", 00:20:38.016 "model_number": "SPDK bdev Controller", 00:20:38.016 "max_namespaces": 2, 00:20:38.016 "min_cntlid": 1, 00:20:38.016 "max_cntlid": 65519, 00:20:38.016 "namespaces": [ 00:20:38.016 { 00:20:38.016 "nsid": 1, 00:20:38.016 "bdev_name": "Malloc0", 00:20:38.016 "name": "Malloc0", 00:20:38.016 "nguid": "93262D98FD214848B0553574DBE411E7", 00:20:38.016 "uuid": "93262d98-fd21-4848-b055-3574dbe411e7" 00:20:38.016 }, 00:20:38.016 { 00:20:38.016 "nsid": 2, 00:20:38.016 "bdev_name": "Malloc1", 00:20:38.016 "name": "Malloc1", 00:20:38.016 "nguid": "F5355C35103D4AFEB91803105C70C523", 00:20:38.016 "uuid": "f5355c35-103d-4afe-b918-03105c70c523" 00:20:38.016 } 00:20:38.016 ] 00:20:38.016 } 00:20:38.016 ] 00:20:38.016 23:47:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.016 23:47:13 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 3833014 00:20:38.016 Asynchronous Event Request test 00:20:38.016 Attaching to 10.0.0.2 00:20:38.016 Attached to 10.0.0.2 00:20:38.016 Registering asynchronous event callbacks... 00:20:38.016 Starting namespace attribute notice tests for all controllers... 
00:20:38.016 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:38.016 aer_cb - Changed Namespace 00:20:38.016 Cleaning up... 00:20:38.016 23:47:13 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:38.016 23:47:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.016 23:47:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:38.016 23:47:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.016 23:47:13 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:38.016 23:47:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.016 23:47:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:38.274 23:47:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.274 23:47:13 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:38.274 23:47:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.274 23:47:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:38.274 23:47:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.274 23:47:13 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:38.275 23:47:13 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:20:38.275 23:47:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:38.275 23:47:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:20:38.275 23:47:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:38.275 23:47:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:20:38.275 23:47:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:38.275 23:47:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:38.275 rmmod nvme_tcp 00:20:38.275 rmmod nvme_fabrics 00:20:38.275 rmmod nvme_keyring 00:20:38.275 23:47:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:38.275 23:47:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:20:38.275 23:47:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:20:38.275 23:47:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3832873 ']' 00:20:38.275 23:47:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3832873 00:20:38.275 23:47:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 3832873 ']' 00:20:38.275 23:47:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 3832873 00:20:38.275 23:47:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:20:38.275 23:47:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:38.275 23:47:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3832873 00:20:38.275 23:47:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:38.275 23:47:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:38.275 23:47:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3832873' 00:20:38.275 killing process with pid 3832873 00:20:38.275 23:47:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 3832873 00:20:38.275 23:47:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 3832873 00:20:38.534 23:47:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == 
iso ']' 00:20:38.534 23:47:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:38.534 23:47:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:38.534 23:47:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:38.534 23:47:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:38.534 23:47:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.534 23:47:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:38.534 23:47:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.437 23:47:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:40.437 00:20:40.437 real 0m5.628s 00:20:40.437 user 0m4.659s 00:20:40.437 sys 0m1.994s 00:20:40.437 23:47:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:40.437 23:47:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:40.437 ************************************ 00:20:40.437 END TEST nvmf_aer 00:20:40.437 ************************************ 00:20:40.695 23:47:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:40.695 23:47:15 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:40.695 23:47:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:40.695 23:47:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:40.695 23:47:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:40.695 ************************************ 00:20:40.695 START TEST nvmf_async_init 00:20:40.695 ************************************ 00:20:40.695 23:47:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:40.695 * Looking for test storage... 
00:20:40.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:40.695 23:47:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:40.695 23:47:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:20:40.695 23:47:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:40.695 23:47:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:40.695 23:47:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:40.695 23:47:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:40.695 23:47:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:40.695 23:47:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:40.695 23:47:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:40.695 23:47:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:40.695 23:47:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:40.695 23:47:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:40.695 23:47:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:40.695 23:47:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:40.695 23:47:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:40.695 23:47:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:40.695 23:47:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:40.695 23:47:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:40.695 23:47:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:40.695 23:47:15 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:40.695 23:47:15 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:40.695 23:47:15 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:40.695 23:47:15 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.695 23:47:15 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.695 23:47:15 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.696 23:47:15 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:20:40.696 23:47:15 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.696 23:47:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:20:40.696 23:47:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:40.696 23:47:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:40.696 23:47:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:40.696 23:47:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:40.696 23:47:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:40.696 23:47:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:40.696 23:47:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:40.696 23:47:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:40.696 23:47:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:40.696 23:47:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:20:40.696 23:47:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:20:40.696 23:47:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:40.696 23:47:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:20:40.696 23:47:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:20:40.696 23:47:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=1aa31135ac094121a41b9aeb0bd031fd 00:20:40.696 23:47:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:20:40.696 23:47:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:40.696 23:47:15 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:40.696 23:47:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:40.696 23:47:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:40.696 23:47:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:40.696 23:47:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.696 23:47:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:40.696 23:47:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.696 23:47:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:40.696 23:47:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:40.696 23:47:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:20:40.696 23:47:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:43.227 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:43.227 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:43.227 Found net devices under 0000:09:00.0: cvl_0_0 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
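The discovery above comes from gather_supported_nvmf_pci_devs: the harness matches Intel e810 functions (device IDs 0x1592/0x159b) out of its PCI bus cache, then resolves each function to its kernel net device through sysfs, keeping only links that are up. A standalone sketch of the same mapping, assuming the [[ up == up ]] guard in the trace expands from the interface operstate (the pci_bus_cache helper itself never appears in this log):

    # map Intel e810 functions (vendor 0x8086, device 0x1592/0x159b) to net devices
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(cat "$pci/vendor") device=$(cat "$pci/device")
        [[ $vendor == 0x8086 && ($device == 0x1592 || $device == 0x159b) ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] || continue
            # keep only interfaces that are up, matching the trace's up == up check
            [[ $(cat "$net/operstate") == up ]] || continue
            echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done

On this node both 0x159b ports resolve to cvl_0_0 and cvl_0_1, which is exactly what the two Found lines report.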
00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:43.227 Found net devices under 0000:09:00.1: cvl_0_1 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:43.227 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:20:43.228 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:43.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:43.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:20:43.228 00:20:43.228 --- 10.0.0.2 ping statistics --- 00:20:43.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.228 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:20:43.228 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:43.228 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:43.228 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:20:43.228 00:20:43.228 --- 10.0.0.1 ping statistics --- 00:20:43.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.228 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:20:43.228 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:43.228 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:20:43.228 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:43.228 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:43.228 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:43.228 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:43.228 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:43.228 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:43.228 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:43.228 23:47:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:43.228 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:43.228 23:47:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:43.228 23:47:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:43.228 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3834955 00:20:43.228 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:43.228 23:47:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 3834955 00:20:43.228 23:47:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 3834955 ']' 00:20:43.228 23:47:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.228 23:47:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:43.228 23:47:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:43.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:43.228 23:47:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:43.228 23:47:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:43.228 [2024-07-15 23:47:17.961683] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
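At this point nvmftestinit has pinned down the test topology: the first port (cvl_0_0) is moved into a private namespace and becomes the target at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, with an iptables rule admitting NVMe/TCP on 4420 and a ping in each direction as a sanity check. The wiring, collected from the trace above:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    # namespace the target-side port
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # address both ends of the back-to-back link
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # admit NVMe/TCP on the default port, then verify reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

With both pings answered, nvmfappstart launches nvmf_tgt inside the namespace on core mask 0x1 and waits for the RPC socket.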
00:20:43.228 [2024-07-15 23:47:17.961764] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:43.228 EAL: No free 2048 kB hugepages reported on node 1 00:20:43.228 [2024-07-15 23:47:18.024668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.228 [2024-07-15 23:47:18.125029] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:43.228 [2024-07-15 23:47:18.125084] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:43.228 [2024-07-15 23:47:18.125120] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:43.228 [2024-07-15 23:47:18.125131] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:43.228 [2024-07-15 23:47:18.125140] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:43.228 [2024-07-15 23:47:18.125171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.228 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:43.228 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:20:43.228 23:47:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:43.228 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:43.228 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:43.228 23:47:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:43.228 23:47:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:43.228 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.228 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:43.228 [2024-07-15 23:47:18.260880] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:43.228 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.228 23:47:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:43.228 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.228 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:43.228 null0 00:20:43.228 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.228 23:47:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:43.228 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.228 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:43.228 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.228 23:47:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:43.228 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.228 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:43.228 23:47:18 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.228 23:47:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 1aa31135ac094121a41b9aeb0bd031fd 00:20:43.228 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.228 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:43.228 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.228 23:47:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:43.228 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.228 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:43.228 [2024-07-15 23:47:18.301156] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:43.228 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.228 23:47:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:43.228 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.228 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:43.486 nvme0n1 00:20:43.486 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.486 23:47:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:43.486 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.486 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:43.486 [ 00:20:43.486 { 00:20:43.486 "name": "nvme0n1", 00:20:43.486 "aliases": [ 00:20:43.486 "1aa31135-ac09-4121-a41b-9aeb0bd031fd" 00:20:43.486 ], 00:20:43.486 "product_name": "NVMe disk", 00:20:43.486 "block_size": 512, 00:20:43.486 "num_blocks": 2097152, 00:20:43.486 "uuid": "1aa31135-ac09-4121-a41b-9aeb0bd031fd", 00:20:43.486 "assigned_rate_limits": { 00:20:43.486 "rw_ios_per_sec": 0, 00:20:43.486 "rw_mbytes_per_sec": 0, 00:20:43.486 "r_mbytes_per_sec": 0, 00:20:43.486 "w_mbytes_per_sec": 0 00:20:43.486 }, 00:20:43.486 "claimed": false, 00:20:43.486 "zoned": false, 00:20:43.486 "supported_io_types": { 00:20:43.486 "read": true, 00:20:43.486 "write": true, 00:20:43.486 "unmap": false, 00:20:43.486 "flush": true, 00:20:43.486 "reset": true, 00:20:43.486 "nvme_admin": true, 00:20:43.486 "nvme_io": true, 00:20:43.486 "nvme_io_md": false, 00:20:43.486 "write_zeroes": true, 00:20:43.486 "zcopy": false, 00:20:43.486 "get_zone_info": false, 00:20:43.486 "zone_management": false, 00:20:43.486 "zone_append": false, 00:20:43.486 "compare": true, 00:20:43.486 "compare_and_write": true, 00:20:43.486 "abort": true, 00:20:43.486 "seek_hole": false, 00:20:43.486 "seek_data": false, 00:20:43.486 "copy": true, 00:20:43.486 "nvme_iov_md": false 00:20:43.486 }, 00:20:43.486 "memory_domains": [ 00:20:43.486 { 00:20:43.486 "dma_device_id": "system", 00:20:43.486 "dma_device_type": 1 00:20:43.486 } 00:20:43.486 ], 00:20:43.486 "driver_specific": { 00:20:43.486 "nvme": [ 00:20:43.486 { 00:20:43.486 "trid": { 00:20:43.486 "trtype": "TCP", 00:20:43.486 "adrfam": "IPv4", 00:20:43.486 "traddr": "10.0.0.2", 
00:20:43.486 "trsvcid": "4420", 00:20:43.486 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:43.486 }, 00:20:43.486 "ctrlr_data": { 00:20:43.486 "cntlid": 1, 00:20:43.486 "vendor_id": "0x8086", 00:20:43.486 "model_number": "SPDK bdev Controller", 00:20:43.486 "serial_number": "00000000000000000000", 00:20:43.486 "firmware_revision": "24.09", 00:20:43.486 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:43.486 "oacs": { 00:20:43.486 "security": 0, 00:20:43.486 "format": 0, 00:20:43.486 "firmware": 0, 00:20:43.486 "ns_manage": 0 00:20:43.486 }, 00:20:43.486 "multi_ctrlr": true, 00:20:43.486 "ana_reporting": false 00:20:43.486 }, 00:20:43.486 "vs": { 00:20:43.486 "nvme_version": "1.3" 00:20:43.486 }, 00:20:43.486 "ns_data": { 00:20:43.486 "id": 1, 00:20:43.486 "can_share": true 00:20:43.486 } 00:20:43.486 } 00:20:43.486 ], 00:20:43.486 "mp_policy": "active_passive" 00:20:43.486 } 00:20:43.486 } 00:20:43.486 ] 00:20:43.487 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.487 23:47:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:43.487 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.487 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:43.487 [2024-07-15 23:47:18.549799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:43.487 [2024-07-15 23:47:18.549885] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d4090 (9): Bad file descriptor 00:20:43.745 [2024-07-15 23:47:18.682080] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:43.745 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.745 23:47:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:43.745 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.745 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:43.745 [ 00:20:43.745 { 00:20:43.745 "name": "nvme0n1", 00:20:43.745 "aliases": [ 00:20:43.745 "1aa31135-ac09-4121-a41b-9aeb0bd031fd" 00:20:43.745 ], 00:20:43.745 "product_name": "NVMe disk", 00:20:43.745 "block_size": 512, 00:20:43.745 "num_blocks": 2097152, 00:20:43.745 "uuid": "1aa31135-ac09-4121-a41b-9aeb0bd031fd", 00:20:43.745 "assigned_rate_limits": { 00:20:43.745 "rw_ios_per_sec": 0, 00:20:43.745 "rw_mbytes_per_sec": 0, 00:20:43.745 "r_mbytes_per_sec": 0, 00:20:43.745 "w_mbytes_per_sec": 0 00:20:43.745 }, 00:20:43.745 "claimed": false, 00:20:43.745 "zoned": false, 00:20:43.745 "supported_io_types": { 00:20:43.745 "read": true, 00:20:43.745 "write": true, 00:20:43.745 "unmap": false, 00:20:43.745 "flush": true, 00:20:43.745 "reset": true, 00:20:43.745 "nvme_admin": true, 00:20:43.745 "nvme_io": true, 00:20:43.745 "nvme_io_md": false, 00:20:43.745 "write_zeroes": true, 00:20:43.745 "zcopy": false, 00:20:43.745 "get_zone_info": false, 00:20:43.745 "zone_management": false, 00:20:43.745 "zone_append": false, 00:20:43.745 "compare": true, 00:20:43.745 "compare_and_write": true, 00:20:43.746 "abort": true, 00:20:43.746 "seek_hole": false, 00:20:43.746 "seek_data": false, 00:20:43.746 "copy": true, 00:20:43.746 "nvme_iov_md": false 00:20:43.746 }, 00:20:43.746 "memory_domains": [ 00:20:43.746 { 00:20:43.746 "dma_device_id": "system", 00:20:43.746 "dma_device_type": 
1 00:20:43.746 } 00:20:43.746 ], 00:20:43.746 "driver_specific": { 00:20:43.746 "nvme": [ 00:20:43.746 { 00:20:43.746 "trid": { 00:20:43.746 "trtype": "TCP", 00:20:43.746 "adrfam": "IPv4", 00:20:43.746 "traddr": "10.0.0.2", 00:20:43.746 "trsvcid": "4420", 00:20:43.746 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:43.746 }, 00:20:43.746 "ctrlr_data": { 00:20:43.746 "cntlid": 2, 00:20:43.746 "vendor_id": "0x8086", 00:20:43.746 "model_number": "SPDK bdev Controller", 00:20:43.746 "serial_number": "00000000000000000000", 00:20:43.746 "firmware_revision": "24.09", 00:20:43.746 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:43.746 "oacs": { 00:20:43.746 "security": 0, 00:20:43.746 "format": 0, 00:20:43.746 "firmware": 0, 00:20:43.746 "ns_manage": 0 00:20:43.746 }, 00:20:43.746 "multi_ctrlr": true, 00:20:43.746 "ana_reporting": false 00:20:43.746 }, 00:20:43.746 "vs": { 00:20:43.746 "nvme_version": "1.3" 00:20:43.746 }, 00:20:43.746 "ns_data": { 00:20:43.746 "id": 1, 00:20:43.746 "can_share": true 00:20:43.746 } 00:20:43.746 } 00:20:43.746 ], 00:20:43.746 "mp_policy": "active_passive" 00:20:43.746 } 00:20:43.746 } 00:20:43.746 ] 00:20:43.746 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.746 23:47:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:43.746 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.746 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:43.746 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.746 23:47:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:20:43.746 23:47:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.C96w9PygTJ 00:20:43.746 23:47:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:43.746 23:47:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.C96w9PygTJ 00:20:43.746 23:47:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:43.746 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.746 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:43.746 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.746 23:47:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:43.746 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.746 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:43.746 [2024-07-15 23:47:18.726376] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:43.746 [2024-07-15 23:47:18.726487] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:43.746 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.746 23:47:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.C96w9PygTJ 00:20:43.746 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
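The body of the async_init test is a short RPC conversation: create the transport and a null bdev, expose it through subsystem cnode0, attach/reset/detach the initiator-side controller, then repeat the attach through a TLS-protected listener. Condensed from the rpc_cmd lines in the trace, assuming the usual scripts/rpc.py front end behind rpc_cmd (the PSK path is this particular run's mktemp result):

    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py bdev_null_create null0 1024 512     # 2097152 blocks x 512 B
    scripts/rpc.py bdev_wait_for_examine
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 \
        -g 1aa31135ac094121a41b9aeb0bd031fd
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
        -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    scripts/rpc.py bdev_nvme_reset_controller nvme0
    scripts/rpc.py bdev_nvme_detach_controller nvme0
    # TLS variant: a second listener on 4421 restricted to one host with a PSK
    key_path=$(mktemp)
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
    chmod 0600 "$key_path"
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
        nqn.2016-06.io.spdk:host1 --psk "$key_path"

The matching bdev_nvme_attach_controller with -q nqn.2016-06.io.spdk:host1 and the same --psk follows just below; note cntlid advancing 1 -> 2 -> 3 across the successive bdev_get_bdevs dumps as each new admin connection is made.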
00:20:43.746 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:43.746 [2024-07-15 23:47:18.734400] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:43.746 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.746 23:47:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.C96w9PygTJ 00:20:43.746 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.746 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:43.746 [2024-07-15 23:47:18.742428] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:43.746 [2024-07-15 23:47:18.742488] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:43.746 nvme0n1 00:20:43.746 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.746 23:47:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:43.746 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.746 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:43.746 [ 00:20:43.746 { 00:20:43.746 "name": "nvme0n1", 00:20:43.746 "aliases": [ 00:20:43.746 "1aa31135-ac09-4121-a41b-9aeb0bd031fd" 00:20:43.746 ], 00:20:43.746 "product_name": "NVMe disk", 00:20:43.746 "block_size": 512, 00:20:43.746 "num_blocks": 2097152, 00:20:43.746 "uuid": "1aa31135-ac09-4121-a41b-9aeb0bd031fd", 00:20:43.746 "assigned_rate_limits": { 00:20:43.746 "rw_ios_per_sec": 0, 00:20:43.746 "rw_mbytes_per_sec": 0, 00:20:43.746 "r_mbytes_per_sec": 0, 00:20:43.746 "w_mbytes_per_sec": 0 00:20:43.746 }, 00:20:43.746 "claimed": false, 00:20:43.746 "zoned": false, 00:20:43.746 "supported_io_types": { 00:20:43.746 "read": true, 00:20:43.746 "write": true, 00:20:43.746 "unmap": false, 00:20:43.746 "flush": true, 00:20:43.746 "reset": true, 00:20:43.746 "nvme_admin": true, 00:20:43.746 "nvme_io": true, 00:20:43.746 "nvme_io_md": false, 00:20:43.746 "write_zeroes": true, 00:20:43.746 "zcopy": false, 00:20:43.746 "get_zone_info": false, 00:20:43.746 "zone_management": false, 00:20:43.746 "zone_append": false, 00:20:43.746 "compare": true, 00:20:43.746 "compare_and_write": true, 00:20:43.746 "abort": true, 00:20:43.746 "seek_hole": false, 00:20:43.746 "seek_data": false, 00:20:43.746 "copy": true, 00:20:43.746 "nvme_iov_md": false 00:20:43.746 }, 00:20:43.746 "memory_domains": [ 00:20:43.746 { 00:20:43.746 "dma_device_id": "system", 00:20:43.746 "dma_device_type": 1 00:20:43.746 } 00:20:43.746 ], 00:20:43.746 "driver_specific": { 00:20:43.746 "nvme": [ 00:20:43.746 { 00:20:43.746 "trid": { 00:20:43.746 "trtype": "TCP", 00:20:43.746 "adrfam": "IPv4", 00:20:43.746 "traddr": "10.0.0.2", 00:20:43.746 "trsvcid": "4421", 00:20:43.746 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:43.746 }, 00:20:43.746 "ctrlr_data": { 00:20:43.746 "cntlid": 3, 00:20:43.746 "vendor_id": "0x8086", 00:20:43.746 "model_number": "SPDK bdev Controller", 00:20:43.746 "serial_number": "00000000000000000000", 00:20:43.746 "firmware_revision": "24.09", 00:20:43.746 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:20:43.746 "oacs": { 00:20:43.746 "security": 0, 00:20:43.746 "format": 0, 00:20:43.746 "firmware": 0, 00:20:43.746 "ns_manage": 0 00:20:43.746 }, 00:20:43.746 "multi_ctrlr": true, 00:20:43.746 "ana_reporting": false 00:20:43.746 }, 00:20:43.746 "vs": { 00:20:43.746 "nvme_version": "1.3" 00:20:43.746 }, 00:20:43.746 "ns_data": { 00:20:43.746 "id": 1, 00:20:43.746 "can_share": true 00:20:43.746 } 00:20:43.746 } 00:20:43.746 ], 00:20:43.746 "mp_policy": "active_passive" 00:20:43.746 } 00:20:43.746 } 00:20:43.746 ] 00:20:43.746 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.746 23:47:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:43.746 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.746 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:43.746 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.746 23:47:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.C96w9PygTJ 00:20:43.746 23:47:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:43.746 23:47:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:20:43.746 23:47:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:43.746 23:47:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:20:43.746 23:47:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:43.746 23:47:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:20:43.746 23:47:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:43.746 23:47:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:43.746 rmmod nvme_tcp 00:20:43.746 rmmod nvme_fabrics 00:20:44.004 rmmod nvme_keyring 00:20:44.004 23:47:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:44.004 23:47:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:20:44.004 23:47:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:20:44.005 23:47:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3834955 ']' 00:20:44.005 23:47:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3834955 00:20:44.005 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 3834955 ']' 00:20:44.005 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 3834955 00:20:44.005 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:20:44.005 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:44.005 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3834955 00:20:44.005 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:44.005 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:44.005 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3834955' 00:20:44.005 killing process with pid 3834955 00:20:44.005 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 3834955 00:20:44.005 [2024-07-15 23:47:18.923029] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled 
for removal in v24.09 hit 1 times 00:20:44.005 [2024-07-15 23:47:18.923067] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:44.005 23:47:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 3834955 00:20:44.263 23:47:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:44.263 23:47:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:44.263 23:47:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:44.263 23:47:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:44.263 23:47:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:44.263 23:47:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:44.263 23:47:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:44.263 23:47:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.172 23:47:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:46.172 00:20:46.172 real 0m5.605s 00:20:46.172 user 0m2.083s 00:20:46.172 sys 0m1.881s 00:20:46.172 23:47:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:46.172 23:47:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:46.172 ************************************ 00:20:46.172 END TEST nvmf_async_init 00:20:46.172 ************************************ 00:20:46.172 23:47:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:46.172 23:47:21 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:46.172 23:47:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:46.172 23:47:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:46.172 23:47:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:46.172 ************************************ 00:20:46.172 START TEST dma 00:20:46.172 ************************************ 00:20:46.172 23:47:21 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:46.431 * Looking for test storage... 
00:20:46.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:46.431 23:47:21 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:46.431 23:47:21 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:20:46.431 23:47:21 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:46.431 23:47:21 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:46.431 23:47:21 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:46.431 23:47:21 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:46.431 23:47:21 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:46.431 23:47:21 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:46.431 23:47:21 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:46.431 23:47:21 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:46.431 23:47:21 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:46.431 23:47:21 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:46.431 23:47:21 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:46.431 23:47:21 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:46.431 23:47:21 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:46.431 23:47:21 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:46.431 23:47:21 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:46.431 23:47:21 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:46.431 23:47:21 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:46.431 23:47:21 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:46.431 23:47:21 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:46.431 23:47:21 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:46.431 23:47:21 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.431 23:47:21 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.431 23:47:21 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.431 23:47:21 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:20:46.431 23:47:21 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.431 23:47:21 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:20:46.431 23:47:21 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:46.432 23:47:21 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:46.432 23:47:21 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:46.432 23:47:21 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:46.432 23:47:21 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:46.432 23:47:21 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:46.432 23:47:21 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:46.432 23:47:21 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:46.432 23:47:21 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:46.432 23:47:21 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:20:46.432 00:20:46.432 real 0m0.068s 00:20:46.432 user 0m0.029s 00:20:46.432 sys 0m0.044s 00:20:46.432 23:47:21 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:46.432 23:47:21 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:20:46.432 ************************************ 00:20:46.432 END TEST dma 00:20:46.432 ************************************ 00:20:46.432 23:47:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:46.432 23:47:21 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:46.432 23:47:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:46.432 23:47:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:46.432 23:47:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:46.432 ************************************ 00:20:46.432 START TEST nvmf_identify 00:20:46.432 ************************************ 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:46.432 * Looking for test storage... 
00:20:46.432 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:20:46.432 23:47:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:48.962 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:48.962 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:48.962 Found net devices under 0000:09:00.0: cvl_0_0 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:48.962 Found net devices under 0000:09:00.1: cvl_0_1 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:48.962 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:48.963 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:48.963 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:48.963 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:48.963 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:48.963 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:48.963 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:48.963 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:48.963 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:48.963 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:48.963 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:48.963 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:48.963 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:48.963 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:48.963 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:48.963 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:20:48.963 00:20:48.963 --- 10.0.0.2 ping statistics --- 00:20:48.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.963 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:20:48.963 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:48.963 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:48.963 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:20:48.963 00:20:48.963 --- 10.0.0.1 ping statistics --- 00:20:48.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.963 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:20:48.963 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:48.963 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:20:48.963 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:48.963 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:48.963 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:48.963 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:48.963 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:48.963 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:48.963 23:47:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:48.963 23:47:23 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:48.963 23:47:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:48.963 23:47:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:48.963 23:47:23 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3837083 00:20:48.963 23:47:23 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:48.963 23:47:23 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:48.963 23:47:23 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3837083 00:20:48.963 23:47:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 3837083 ']' 00:20:48.963 23:47:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.963 23:47:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:48.963 23:47:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:48.963 23:47:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:48.963 23:47:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:48.963 [2024-07-15 23:47:23.704155] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:20:48.963 [2024-07-15 23:47:23.704253] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:48.963 EAL: No free 2048 kB hugepages reported on node 1 00:20:48.963 [2024-07-15 23:47:23.769317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:48.963 [2024-07-15 23:47:23.882054] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:48.963 [2024-07-15 23:47:23.882116] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:48.963 [2024-07-15 23:47:23.882145] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:48.963 [2024-07-15 23:47:23.882156] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:48.963 [2024-07-15 23:47:23.882166] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:48.963 [2024-07-15 23:47:23.882217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:48.963 [2024-07-15 23:47:23.882275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:48.963 [2024-07-15 23:47:23.882340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:48.963 [2024-07-15 23:47:23.882342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.963 23:47:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:48.963 23:47:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:20:48.963 23:47:24 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:48.963 23:47:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.963 23:47:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:48.963 [2024-07-15 23:47:24.015618] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:48.963 23:47:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.963 23:47:24 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:48.963 23:47:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:48.963 23:47:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:48.963 23:47:24 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:48.963 23:47:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.963 23:47:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:48.963 Malloc0 00:20:48.963 23:47:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.963 23:47:24 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:48.963 23:47:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.963 23:47:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:48.963 23:47:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.963 23:47:24 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:48.963 23:47:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.963 23:47:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:49.222 23:47:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.222 23:47:24 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:49.222 23:47:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:20:49.222 23:47:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:49.222 [2024-07-15 23:47:24.091061] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:49.222 23:47:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.222 23:47:24 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:49.222 23:47:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.222 23:47:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:49.222 23:47:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.222 23:47:24 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:49.222 23:47:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.222 23:47:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:49.222 [ 00:20:49.222 { 00:20:49.222 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:49.222 "subtype": "Discovery", 00:20:49.222 "listen_addresses": [ 00:20:49.222 { 00:20:49.222 "trtype": "TCP", 00:20:49.222 "adrfam": "IPv4", 00:20:49.222 "traddr": "10.0.0.2", 00:20:49.222 "trsvcid": "4420" 00:20:49.222 } 00:20:49.222 ], 00:20:49.222 "allow_any_host": true, 00:20:49.222 "hosts": [] 00:20:49.222 }, 00:20:49.222 { 00:20:49.222 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:49.222 "subtype": "NVMe", 00:20:49.222 "listen_addresses": [ 00:20:49.222 { 00:20:49.222 "trtype": "TCP", 00:20:49.222 "adrfam": "IPv4", 00:20:49.222 "traddr": "10.0.0.2", 00:20:49.222 "trsvcid": "4420" 00:20:49.222 } 00:20:49.222 ], 00:20:49.222 "allow_any_host": true, 00:20:49.222 "hosts": [], 00:20:49.222 "serial_number": "SPDK00000000000001", 00:20:49.222 "model_number": "SPDK bdev Controller", 00:20:49.222 "max_namespaces": 32, 00:20:49.222 "min_cntlid": 1, 00:20:49.222 "max_cntlid": 65519, 00:20:49.222 "namespaces": [ 00:20:49.222 { 00:20:49.222 "nsid": 1, 00:20:49.222 "bdev_name": "Malloc0", 00:20:49.222 "name": "Malloc0", 00:20:49.222 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:49.222 "eui64": "ABCDEF0123456789", 00:20:49.222 "uuid": "6634b74d-84fb-485d-8500-251d7a806324" 00:20:49.222 } 00:20:49.222 ] 00:20:49.222 } 00:20:49.222 ] 00:20:49.222 23:47:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.222 23:47:24 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:49.222 [2024-07-15 23:47:24.132405] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
00:20:49.222 [2024-07-15 23:47:24.132450] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3837222 ] 00:20:49.222 EAL: No free 2048 kB hugepages reported on node 1 00:20:49.222 [2024-07-15 23:47:24.166181] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:49.222 [2024-07-15 23:47:24.166244] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:49.222 [2024-07-15 23:47:24.166254] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:49.222 [2024-07-15 23:47:24.166269] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:49.222 [2024-07-15 23:47:24.166279] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:49.222 [2024-07-15 23:47:24.166547] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:49.223 [2024-07-15 23:47:24.166595] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1505540 0 00:20:49.223 [2024-07-15 23:47:24.172993] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:49.223 [2024-07-15 23:47:24.173013] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:49.223 [2024-07-15 23:47:24.173025] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:49.223 [2024-07-15 23:47:24.173047] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:49.223 [2024-07-15 23:47:24.173101] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.223 [2024-07-15 23:47:24.173114] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.223 [2024-07-15 23:47:24.173121] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1505540) 00:20:49.223 [2024-07-15 23:47:24.173138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:49.223 [2024-07-15 23:47:24.173165] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15653c0, cid 0, qid 0 00:20:49.223 [2024-07-15 23:47:24.179969] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.223 [2024-07-15 23:47:24.179987] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.223 [2024-07-15 23:47:24.179995] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.223 [2024-07-15 23:47:24.180004] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15653c0) on tqpair=0x1505540 00:20:49.223 [2024-07-15 23:47:24.180024] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:49.223 [2024-07-15 23:47:24.180036] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:49.223 [2024-07-15 23:47:24.180046] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:49.223 [2024-07-15 23:47:24.180067] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.223 [2024-07-15 23:47:24.180076] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.223 [2024-07-15 23:47:24.180083] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1505540) 00:20:49.223 [2024-07-15 23:47:24.180095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.223 [2024-07-15 23:47:24.180119] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15653c0, cid 0, qid 0 00:20:49.223 [2024-07-15 23:47:24.180250] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.223 [2024-07-15 23:47:24.180263] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.223 [2024-07-15 23:47:24.180270] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.223 [2024-07-15 23:47:24.180277] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15653c0) on tqpair=0x1505540 00:20:49.223 [2024-07-15 23:47:24.180286] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:49.223 [2024-07-15 23:47:24.180298] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:49.223 [2024-07-15 23:47:24.180311] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.223 [2024-07-15 23:47:24.180319] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.223 [2024-07-15 23:47:24.180326] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1505540) 00:20:49.223 [2024-07-15 23:47:24.180337] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.223 [2024-07-15 23:47:24.180358] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15653c0, cid 0, qid 0 00:20:49.223 [2024-07-15 23:47:24.180443] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.223 [2024-07-15 23:47:24.180455] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.223 [2024-07-15 23:47:24.180462] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.223 [2024-07-15 23:47:24.180469] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15653c0) on tqpair=0x1505540 00:20:49.223 [2024-07-15 23:47:24.180478] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:49.223 [2024-07-15 23:47:24.180497] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:49.223 [2024-07-15 23:47:24.180510] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.223 [2024-07-15 23:47:24.180518] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.223 [2024-07-15 23:47:24.180524] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1505540) 00:20:49.223 [2024-07-15 23:47:24.180535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.223 [2024-07-15 23:47:24.180556] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15653c0, cid 0, qid 0 00:20:49.223 [2024-07-15 23:47:24.180647] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.223 
[2024-07-15 23:47:24.180662] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.223 [2024-07-15 23:47:24.180669] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.223 [2024-07-15 23:47:24.180676] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15653c0) on tqpair=0x1505540 00:20:49.223 [2024-07-15 23:47:24.180685] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:49.223 [2024-07-15 23:47:24.180702] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.223 [2024-07-15 23:47:24.180711] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.223 [2024-07-15 23:47:24.180718] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1505540) 00:20:49.223 [2024-07-15 23:47:24.180729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.223 [2024-07-15 23:47:24.180749] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15653c0, cid 0, qid 0 00:20:49.223 [2024-07-15 23:47:24.180835] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.223 [2024-07-15 23:47:24.180847] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.223 [2024-07-15 23:47:24.180854] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.223 [2024-07-15 23:47:24.180861] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15653c0) on tqpair=0x1505540 00:20:49.223 [2024-07-15 23:47:24.180869] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:49.223 [2024-07-15 23:47:24.180878] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:49.223 [2024-07-15 23:47:24.180891] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:49.223 [2024-07-15 23:47:24.181001] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:49.223 [2024-07-15 23:47:24.181011] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:49.223 [2024-07-15 23:47:24.181025] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.223 [2024-07-15 23:47:24.181033] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.223 [2024-07-15 23:47:24.181040] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1505540) 00:20:49.223 [2024-07-15 23:47:24.181050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.223 [2024-07-15 23:47:24.181072] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15653c0, cid 0, qid 0 00:20:49.223 [2024-07-15 23:47:24.181201] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.223 [2024-07-15 23:47:24.181213] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.223 [2024-07-15 23:47:24.181221] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:20:49.223 [2024-07-15 23:47:24.181232] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15653c0) on tqpair=0x1505540 00:20:49.223 [2024-07-15 23:47:24.181241] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:49.223 [2024-07-15 23:47:24.181257] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.223 [2024-07-15 23:47:24.181266] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.223 [2024-07-15 23:47:24.181273] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1505540) 00:20:49.223 [2024-07-15 23:47:24.181283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.223 [2024-07-15 23:47:24.181305] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15653c0, cid 0, qid 0 00:20:49.223 [2024-07-15 23:47:24.181403] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.223 [2024-07-15 23:47:24.181418] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.223 [2024-07-15 23:47:24.181425] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.223 [2024-07-15 23:47:24.181432] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15653c0) on tqpair=0x1505540 00:20:49.223 [2024-07-15 23:47:24.181440] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:49.223 [2024-07-15 23:47:24.181449] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:49.223 [2024-07-15 23:47:24.181462] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:49.223 [2024-07-15 23:47:24.181483] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:49.223 [2024-07-15 23:47:24.181499] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.223 [2024-07-15 23:47:24.181507] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1505540) 00:20:49.223 [2024-07-15 23:47:24.181518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.223 [2024-07-15 23:47:24.181539] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15653c0, cid 0, qid 0 00:20:49.223 [2024-07-15 23:47:24.181667] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:49.223 [2024-07-15 23:47:24.181682] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:49.223 [2024-07-15 23:47:24.181690] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:49.223 [2024-07-15 23:47:24.181697] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1505540): datao=0, datal=4096, cccid=0 00:20:49.223 [2024-07-15 23:47:24.181705] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15653c0) on tqpair(0x1505540): expected_datao=0, payload_size=4096 00:20:49.223 [2024-07-15 23:47:24.181713] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:20:49.223 [2024-07-15 23:47:24.181731] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:49.223 [2024-07-15 23:47:24.181741] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:49.223 [2024-07-15 23:47:24.222060] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.223 [2024-07-15 23:47:24.222080] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.223 [2024-07-15 23:47:24.222088] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.223 [2024-07-15 23:47:24.222095] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15653c0) on tqpair=0x1505540 00:20:49.223 [2024-07-15 23:47:24.222108] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:49.223 [2024-07-15 23:47:24.222122] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:49.223 [2024-07-15 23:47:24.222135] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:49.223 [2024-07-15 23:47:24.222144] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:49.223 [2024-07-15 23:47:24.222153] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:49.224 [2024-07-15 23:47:24.222161] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:49.224 [2024-07-15 23:47:24.222176] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:49.224 [2024-07-15 23:47:24.222189] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.224 [2024-07-15 23:47:24.222196] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.224 [2024-07-15 23:47:24.222203] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1505540) 00:20:49.224 [2024-07-15 23:47:24.222215] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:49.224 [2024-07-15 23:47:24.222238] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15653c0, cid 0, qid 0 00:20:49.224 [2024-07-15 23:47:24.222333] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.224 [2024-07-15 23:47:24.222345] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.224 [2024-07-15 23:47:24.222352] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.224 [2024-07-15 23:47:24.222359] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15653c0) on tqpair=0x1505540 00:20:49.224 [2024-07-15 23:47:24.222371] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.224 [2024-07-15 23:47:24.222378] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.224 [2024-07-15 23:47:24.222385] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1505540) 00:20:49.224 [2024-07-15 23:47:24.222395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:49.224 [2024-07-15 23:47:24.222405] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.224 [2024-07-15 23:47:24.222412] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.224 [2024-07-15 23:47:24.222418] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1505540) 00:20:49.224 [2024-07-15 23:47:24.222427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:49.224 [2024-07-15 23:47:24.222437] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.224 [2024-07-15 23:47:24.222444] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.224 [2024-07-15 23:47:24.222450] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1505540) 00:20:49.224 [2024-07-15 23:47:24.222459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:49.224 [2024-07-15 23:47:24.222469] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.224 [2024-07-15 23:47:24.222475] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.224 [2024-07-15 23:47:24.222482] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1505540) 00:20:49.224 [2024-07-15 23:47:24.222491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:49.224 [2024-07-15 23:47:24.222500] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:49.224 [2024-07-15 23:47:24.222519] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:49.224 [2024-07-15 23:47:24.222535] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.224 [2024-07-15 23:47:24.222543] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1505540) 00:20:49.224 [2024-07-15 23:47:24.222554] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.224 [2024-07-15 23:47:24.222591] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15653c0, cid 0, qid 0 00:20:49.224 [2024-07-15 23:47:24.222603] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1565540, cid 1, qid 0 00:20:49.224 [2024-07-15 23:47:24.222611] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15656c0, cid 2, qid 0 00:20:49.224 [2024-07-15 23:47:24.222618] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1565840, cid 3, qid 0 00:20:49.224 [2024-07-15 23:47:24.222626] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15659c0, cid 4, qid 0 00:20:49.224 [2024-07-15 23:47:24.222829] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.224 [2024-07-15 23:47:24.222845] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.224 [2024-07-15 23:47:24.222852] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.224 [2024-07-15 23:47:24.222859] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15659c0) on tqpair=0x1505540 00:20:49.224 [2024-07-15 23:47:24.222868] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:49.224 [2024-07-15 23:47:24.222877] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:49.224 [2024-07-15 23:47:24.222896] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.224 [2024-07-15 23:47:24.222906] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1505540) 00:20:49.224 [2024-07-15 23:47:24.222917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.224 [2024-07-15 23:47:24.222938] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15659c0, cid 4, qid 0 00:20:49.224 [2024-07-15 23:47:24.223044] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:49.224 [2024-07-15 23:47:24.223059] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:49.224 [2024-07-15 23:47:24.223067] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:49.224 [2024-07-15 23:47:24.223074] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1505540): datao=0, datal=4096, cccid=4 00:20:49.224 [2024-07-15 23:47:24.223081] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15659c0) on tqpair(0x1505540): expected_datao=0, payload_size=4096 00:20:49.224 [2024-07-15 23:47:24.223089] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.224 [2024-07-15 23:47:24.223106] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:49.224 [2024-07-15 23:47:24.223115] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:49.224 [2024-07-15 23:47:24.223157] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.224 [2024-07-15 23:47:24.223171] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.224 [2024-07-15 23:47:24.223179] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.224 [2024-07-15 23:47:24.223185] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15659c0) on tqpair=0x1505540 00:20:49.224 [2024-07-15 23:47:24.223204] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:49.224 [2024-07-15 23:47:24.223239] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.224 [2024-07-15 23:47:24.223250] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1505540) 00:20:49.224 [2024-07-15 23:47:24.223261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.224 [2024-07-15 23:47:24.223277] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.224 [2024-07-15 23:47:24.223285] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.224 [2024-07-15 23:47:24.223292] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1505540) 00:20:49.224 [2024-07-15 23:47:24.223301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:49.224 [2024-07-15 23:47:24.223328] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x15659c0, cid 4, qid 0 00:20:49.224 [2024-07-15 23:47:24.223340] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1565b40, cid 5, qid 0 00:20:49.224 [2024-07-15 23:47:24.223468] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:49.224 [2024-07-15 23:47:24.223483] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:49.224 [2024-07-15 23:47:24.223490] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:49.224 [2024-07-15 23:47:24.223497] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1505540): datao=0, datal=1024, cccid=4 00:20:49.224 [2024-07-15 23:47:24.223505] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15659c0) on tqpair(0x1505540): expected_datao=0, payload_size=1024 00:20:49.224 [2024-07-15 23:47:24.223512] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.224 [2024-07-15 23:47:24.223522] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:49.224 [2024-07-15 23:47:24.223530] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:49.224 [2024-07-15 23:47:24.223538] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.224 [2024-07-15 23:47:24.223547] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.224 [2024-07-15 23:47:24.223554] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.224 [2024-07-15 23:47:24.223561] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1565b40) on tqpair=0x1505540 00:20:49.224 [2024-07-15 23:47:24.267986] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.224 [2024-07-15 23:47:24.268005] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.224 [2024-07-15 23:47:24.268012] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.224 [2024-07-15 23:47:24.268019] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15659c0) on tqpair=0x1505540 00:20:49.224 [2024-07-15 23:47:24.268037] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.224 [2024-07-15 23:47:24.268047] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1505540) 00:20:49.224 [2024-07-15 23:47:24.268058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.224 [2024-07-15 23:47:24.268103] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15659c0, cid 4, qid 0 00:20:49.224 [2024-07-15 23:47:24.268249] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:49.224 [2024-07-15 23:47:24.268262] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:49.224 [2024-07-15 23:47:24.268270] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:49.224 [2024-07-15 23:47:24.268276] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1505540): datao=0, datal=3072, cccid=4 00:20:49.224 [2024-07-15 23:47:24.268284] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15659c0) on tqpair(0x1505540): expected_datao=0, payload_size=3072 00:20:49.224 [2024-07-15 23:47:24.268291] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.224 [2024-07-15 23:47:24.268311] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:49.224 [2024-07-15 23:47:24.268320] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:49.224 [2024-07-15 23:47:24.309064] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:49.224 [2024-07-15 23:47:24.309084] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:49.224 [2024-07-15 23:47:24.309092] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:49.224 [2024-07-15 23:47:24.309099] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15659c0) on tqpair=0x1505540
00:20:49.224 [2024-07-15 23:47:24.309120] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:49.224 [2024-07-15 23:47:24.309130] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1505540)
00:20:49.224 [2024-07-15 23:47:24.309142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:49.224 [2024-07-15 23:47:24.309172] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15659c0, cid 4, qid 0
00:20:49.224 [2024-07-15 23:47:24.309291] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:20:49.224 [2024-07-15 23:47:24.309306] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:20:49.224 [2024-07-15 23:47:24.309313] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:20:49.224 [2024-07-15 23:47:24.309320] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1505540): datao=0, datal=8, cccid=4
00:20:49.224 [2024-07-15 23:47:24.309328] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15659c0) on tqpair(0x1505540): expected_datao=0, payload_size=8
00:20:49.225 [2024-07-15 23:47:24.309336] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:49.225 [2024-07-15 23:47:24.309346] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:20:49.225 [2024-07-15 23:47:24.309354] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:49.487 [2024-07-15 23:47:24.350074] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:49.487 [2024-07-15 23:47:24.350095] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:49.487 [2024-07-15 23:47:24.350106] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:49.487 [2024-07-15 23:47:24.350119] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15659c0) on tqpair=0x1505540
00:20:49.487 =====================================================
00:20:49.487 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:20:49.487 =====================================================
00:20:49.487 Controller Capabilities/Features
00:20:49.487 ================================
00:20:49.487 Vendor ID: 0000
00:20:49.487 Subsystem Vendor ID: 0000
00:20:49.487 Serial Number: ....................
00:20:49.487 Model Number: ........................................
00:20:49.487 Firmware Version: 24.09
00:20:49.487 Recommended Arb Burst: 0
00:20:49.487 IEEE OUI Identifier: 00 00 00
00:20:49.487 Multi-path I/O
00:20:49.487 May have multiple subsystem ports: No
00:20:49.487 May have multiple controllers: No
00:20:49.487 Associated with SR-IOV VF: No
00:20:49.487 Max Data Transfer Size: 131072
00:20:49.487 Max Number of Namespaces: 0
00:20:49.487 Max Number of I/O Queues: 1024
00:20:49.487 NVMe Specification Version (VS): 1.3
00:20:49.487 NVMe Specification Version (Identify): 1.3
00:20:49.487 Maximum Queue Entries: 128
00:20:49.487 Contiguous Queues Required: Yes
00:20:49.487 Arbitration Mechanisms Supported
00:20:49.487 Weighted Round Robin: Not Supported
00:20:49.487 Vendor Specific: Not Supported
00:20:49.487 Reset Timeout: 15000 ms
00:20:49.487 Doorbell Stride: 4 bytes
00:20:49.487 NVM Subsystem Reset: Not Supported
00:20:49.487 Command Sets Supported
00:20:49.487 NVM Command Set: Supported
00:20:49.487 Boot Partition: Not Supported
00:20:49.487 Memory Page Size Minimum: 4096 bytes
00:20:49.487 Memory Page Size Maximum: 4096 bytes
00:20:49.487 Persistent Memory Region: Not Supported
00:20:49.487 Optional Asynchronous Events Supported
00:20:49.487 Namespace Attribute Notices: Not Supported
00:20:49.487 Firmware Activation Notices: Not Supported
00:20:49.487 ANA Change Notices: Not Supported
00:20:49.487 PLE Aggregate Log Change Notices: Not Supported
00:20:49.487 LBA Status Info Alert Notices: Not Supported
00:20:49.487 EGE Aggregate Log Change Notices: Not Supported
00:20:49.487 Normal NVM Subsystem Shutdown event: Not Supported
00:20:49.487 Zone Descriptor Change Notices: Not Supported
00:20:49.487 Discovery Log Change Notices: Supported
00:20:49.487 Controller Attributes
00:20:49.487 128-bit Host Identifier: Not Supported
00:20:49.487 Non-Operational Permissive Mode: Not Supported
00:20:49.487 NVM Sets: Not Supported
00:20:49.487 Read Recovery Levels: Not Supported
00:20:49.487 Endurance Groups: Not Supported
00:20:49.487 Predictable Latency Mode: Not Supported
00:20:49.487 Traffic Based Keep ALive: Not Supported
00:20:49.487 Namespace Granularity: Not Supported
00:20:49.487 SQ Associations: Not Supported
00:20:49.487 UUID List: Not Supported
00:20:49.487 Multi-Domain Subsystem: Not Supported
00:20:49.487 Fixed Capacity Management: Not Supported
00:20:49.487 Variable Capacity Management: Not Supported
00:20:49.487 Delete Endurance Group: Not Supported
00:20:49.487 Delete NVM Set: Not Supported
00:20:49.487 Extended LBA Formats Supported: Not Supported
00:20:49.487 Flexible Data Placement Supported: Not Supported
00:20:49.487
00:20:49.487 Controller Memory Buffer Support
00:20:49.487 ================================
00:20:49.487 Supported: No
00:20:49.487
00:20:49.487 Persistent Memory Region Support
00:20:49.487 ================================
00:20:49.487 Supported: No
00:20:49.487
00:20:49.487 Admin Command Set Attributes
00:20:49.487 ============================
00:20:49.487 Security Send/Receive: Not Supported
00:20:49.487 Format NVM: Not Supported
00:20:49.487 Firmware Activate/Download: Not Supported
00:20:49.487 Namespace Management: Not Supported
00:20:49.487 Device Self-Test: Not Supported
00:20:49.487 Directives: Not Supported
00:20:49.487 NVMe-MI: Not Supported
00:20:49.487 Virtualization Management: Not Supported
00:20:49.487 Doorbell Buffer Config: Not Supported
00:20:49.487 Get LBA Status Capability: Not Supported
00:20:49.487 Command & Feature Lockdown Capability: Not Supported
00:20:49.487 Abort Command Limit: 1
00:20:49.487 Async Event Request Limit: 4
00:20:49.487 Number of Firmware Slots: N/A
00:20:49.487 Firmware Slot 1 Read-Only: N/A
00:20:49.487 Firmware Activation Without Reset: N/A
00:20:49.487 Multiple Update Detection Support: N/A
00:20:49.487 Firmware Update Granularity: No Information Provided
00:20:49.487 Per-Namespace SMART Log: No
00:20:49.487 Asymmetric Namespace Access Log Page: Not Supported
00:20:49.487 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:20:49.487 Command Effects Log Page: Not Supported
00:20:49.487 Get Log Page Extended Data: Supported
00:20:49.487 Telemetry Log Pages: Not Supported
00:20:49.487 Persistent Event Log Pages: Not Supported
00:20:49.487 Supported Log Pages Log Page: May Support
00:20:49.487 Commands Supported & Effects Log Page: Not Supported
00:20:49.487 Feature Identifiers & Effects Log Page:May Support
00:20:49.487 NVMe-MI Commands & Effects Log Page: May Support
00:20:49.487 Data Area 4 for Telemetry Log: Not Supported
00:20:49.487 Error Log Page Entries Supported: 128
00:20:49.487 Keep Alive: Not Supported
00:20:49.487
00:20:49.488 NVM Command Set Attributes
00:20:49.488 ==========================
00:20:49.488 Submission Queue Entry Size
00:20:49.488 Max: 1
00:20:49.488 Min: 1
00:20:49.488 Completion Queue Entry Size
00:20:49.488 Max: 1
00:20:49.488 Min: 1
00:20:49.488 Number of Namespaces: 0
00:20:49.488 Compare Command: Not Supported
00:20:49.488 Write Uncorrectable Command: Not Supported
00:20:49.488 Dataset Management Command: Not Supported
00:20:49.488 Write Zeroes Command: Not Supported
00:20:49.488 Set Features Save Field: Not Supported
00:20:49.488 Reservations: Not Supported
00:20:49.488 Timestamp: Not Supported
00:20:49.488 Copy: Not Supported
00:20:49.488 Volatile Write Cache: Not Present
00:20:49.488 Atomic Write Unit (Normal): 1
00:20:49.488 Atomic Write Unit (PFail): 1
00:20:49.488 Atomic Compare & Write Unit: 1
00:20:49.488 Fused Compare & Write: Supported
00:20:49.488 Scatter-Gather List
00:20:49.488 SGL Command Set: Supported
00:20:49.488 SGL Keyed: Supported
00:20:49.488 SGL Bit Bucket Descriptor: Not Supported
00:20:49.488 SGL Metadata Pointer: Not Supported
00:20:49.488 Oversized SGL: Not Supported
00:20:49.488 SGL Metadata Address: Not Supported
00:20:49.488 SGL Offset: Supported
00:20:49.488 Transport SGL Data Block: Not Supported
00:20:49.488 Replay Protected Memory Block: Not Supported
00:20:49.488
00:20:49.488 Firmware Slot Information
00:20:49.488 =========================
00:20:49.488 Active slot: 0
00:20:49.488
00:20:49.488
00:20:49.488 Error Log
00:20:49.488 =========
00:20:49.488
00:20:49.488 Active Namespaces
00:20:49.488 =================
00:20:49.488 Discovery Log Page
00:20:49.488 ==================
00:20:49.488 Generation Counter: 2
00:20:49.488 Number of Records: 2
00:20:49.488 Record Format: 0
00:20:49.488
00:20:49.488 Discovery Log Entry 0
00:20:49.488 ----------------------
00:20:49.488 Transport Type: 3 (TCP)
00:20:49.488 Address Family: 1 (IPv4)
00:20:49.488 Subsystem Type: 3 (Current Discovery Subsystem)
00:20:49.488 Entry Flags:
00:20:49.488 Duplicate Returned Information: 1
00:20:49.488 Explicit Persistent Connection Support for Discovery: 1
00:20:49.488 Transport Requirements:
00:20:49.488 Secure Channel: Not Required
00:20:49.488 Port ID: 0 (0x0000)
00:20:49.488 Controller ID: 65535 (0xffff)
00:20:49.488 Admin Max SQ Size: 128
00:20:49.488 Transport Service Identifier: 4420
00:20:49.488 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:20:49.488 Transport Address: 10.0.0.2
00:20:49.488 Discovery Log Entry 1
00:20:49.488 ----------------------
00:20:49.488 Transport Type: 3 (TCP)
00:20:49.488 Address Family: 1 (IPv4)
00:20:49.488 Subsystem Type: 2 (NVM Subsystem)
00:20:49.488 Entry Flags:
00:20:49.488 Duplicate Returned Information: 0
00:20:49.488 Explicit Persistent Connection Support for Discovery: 0
00:20:49.488 Transport Requirements:
00:20:49.488 Secure Channel: Not Required
00:20:49.488 Port ID: 0 (0x0000)
00:20:49.488 Controller ID: 65535 (0xffff)
00:20:49.488 Admin Max SQ Size: 128
00:20:49.488 Transport Service Identifier: 4420
00:20:49.488 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:20:49.488 Transport Address: 10.0.0.2 [2024-07-15 23:47:24.350252] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:20:49.488 [2024-07-15 23:47:24.350276] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15653c0) on tqpair=0x1505540
00:20:49.488 [2024-07-15 23:47:24.350288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:49.488 [2024-07-15 23:47:24.350297] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1565540) on tqpair=0x1505540
00:20:49.488 [2024-07-15 23:47:24.350305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:49.488 [2024-07-15 23:47:24.350314] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15656c0) on tqpair=0x1505540
00:20:49.488 [2024-07-15 23:47:24.350321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:49.488 [2024-07-15 23:47:24.350330] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1565840) on tqpair=0x1505540
00:20:49.488 [2024-07-15 23:47:24.350337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:49.488 [2024-07-15 23:47:24.350355] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:49.488 [2024-07-15 23:47:24.350364] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:49.488 [2024-07-15 23:47:24.350371] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1505540)
00:20:49.488 [2024-07-15 23:47:24.350383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:49.488 [2024-07-15 23:47:24.350422] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1565840, cid 3, qid 0
00:20:49.488 [2024-07-15 23:47:24.350578] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:49.488 [2024-07-15 23:47:24.350594] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:49.488 [2024-07-15 23:47:24.350601] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:49.488 [2024-07-15 23:47:24.350608] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1565840) on tqpair=0x1505540
00:20:49.488 [2024-07-15 23:47:24.350625] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:49.488 [2024-07-15 23:47:24.350633] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:49.488 [2024-07-15 23:47:24.350640] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1505540)
00:20:49.488 [2024-07-15 23:47:24.350651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:49.488 [2024-07-15 23:47:24.350678] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1565840, cid 3, qid 0
00:20:49.488 [2024-07-15 23:47:24.350780] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:49.488 [2024-07-15 23:47:24.350797] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:49.488 [2024-07-15 23:47:24.350804] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:49.488 [2024-07-15 23:47:24.350811] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1565840) on tqpair=0x1505540
00:20:49.488 [2024-07-15 23:47:24.350820] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us
00:20:49.488 [2024-07-15 23:47:24.350829] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms
00:20:49.488 [2024-07-15 23:47:24.350846] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:49.488 [2024-07-15 23:47:24.350855] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:49.488 [2024-07-15 23:47:24.350862] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1505540)
00:20:49.488 [2024-07-15 23:47:24.350873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:49.488 [2024-07-15 23:47:24.350894] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1565840, cid 3, qid 0
00:20:49.488 [2024-07-15 23:47:24.354973] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:49.488 [2024-07-15 23:47:24.354991] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:49.488 [2024-07-15 23:47:24.355015] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:49.488 [2024-07-15 23:47:24.355022] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1565840) on tqpair=0x1505540
00:20:49.488 [2024-07-15 23:47:24.355041] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:49.488 [2024-07-15 23:47:24.355051] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:49.488 [2024-07-15 23:47:24.355058] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1505540)
00:20:49.488 [2024-07-15 23:47:24.355070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:49.488 [2024-07-15 23:47:24.355094] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1565840, cid 3, qid 0
00:20:49.488 [2024-07-15 23:47:24.355228] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:49.488 [2024-07-15 23:47:24.355243] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:49.488 [2024-07-15 23:47:24.355250] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:49.488 [2024-07-15 23:47:24.355257] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1565840) on tqpair=0x1505540
00:20:49.488 [2024-07-15 23:47:24.355270] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds
00:20:49.488
00:20:49.488
00:20:49.488 23:47:24 nvmf_tcp.nvmf_identify -- host/identify.sh@45 --
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:49.488 [2024-07-15 23:47:24.387842] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:20:49.488 [2024-07-15 23:47:24.387885] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3837233 ] 00:20:49.488 EAL: No free 2048 kB hugepages reported on node 1 00:20:49.488 [2024-07-15 23:47:24.418809] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:49.488 [2024-07-15 23:47:24.418857] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:49.488 [2024-07-15 23:47:24.418866] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:49.488 [2024-07-15 23:47:24.418879] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:49.488 [2024-07-15 23:47:24.418888] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:49.488 [2024-07-15 23:47:24.422002] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:49.488 [2024-07-15 23:47:24.422041] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xe40540 0 00:20:49.488 [2024-07-15 23:47:24.428982] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:49.488 [2024-07-15 23:47:24.429001] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:49.488 [2024-07-15 23:47:24.429009] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:49.488 [2024-07-15 23:47:24.429015] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:49.489 [2024-07-15 23:47:24.429068] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.489 [2024-07-15 23:47:24.429080] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.489 [2024-07-15 23:47:24.429087] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe40540) 00:20:49.489 [2024-07-15 23:47:24.429100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:49.489 [2024-07-15 23:47:24.429127] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea03c0, cid 0, qid 0 00:20:49.489 [2024-07-15 23:47:24.436973] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.489 [2024-07-15 23:47:24.436990] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.489 [2024-07-15 23:47:24.436998] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.489 [2024-07-15 23:47:24.437005] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea03c0) on tqpair=0xe40540 00:20:49.489 [2024-07-15 23:47:24.437023] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:49.489 [2024-07-15 23:47:24.437034] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:49.489 [2024-07-15 23:47:24.437043] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:49.489 [2024-07-15 23:47:24.437063] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.489 [2024-07-15 23:47:24.437072] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.489 [2024-07-15 23:47:24.437079] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe40540) 00:20:49.489 [2024-07-15 23:47:24.437090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.489 [2024-07-15 23:47:24.437113] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea03c0, cid 0, qid 0 00:20:49.489 [2024-07-15 23:47:24.437219] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.489 [2024-07-15 23:47:24.437231] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.489 [2024-07-15 23:47:24.437238] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.489 [2024-07-15 23:47:24.437245] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea03c0) on tqpair=0xe40540 00:20:49.489 [2024-07-15 23:47:24.437253] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:49.489 [2024-07-15 23:47:24.437270] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:49.489 [2024-07-15 23:47:24.437284] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.489 [2024-07-15 23:47:24.437291] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.489 [2024-07-15 23:47:24.437298] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe40540) 00:20:49.489 [2024-07-15 23:47:24.437308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.489 [2024-07-15 23:47:24.437329] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea03c0, cid 0, qid 0 00:20:49.489 [2024-07-15 23:47:24.437422] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.489 [2024-07-15 23:47:24.437436] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.489 [2024-07-15 23:47:24.437443] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.489 [2024-07-15 23:47:24.437450] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea03c0) on tqpair=0xe40540 00:20:49.489 [2024-07-15 23:47:24.437458] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:49.489 [2024-07-15 23:47:24.437473] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:49.489 [2024-07-15 23:47:24.437485] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.489 [2024-07-15 23:47:24.437492] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.489 [2024-07-15 23:47:24.437499] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe40540) 00:20:49.489 [2024-07-15 23:47:24.437509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.489 [2024-07-15 23:47:24.437530] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea03c0, cid 0, qid 0 00:20:49.489 [2024-07-15 23:47:24.437636] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.489 [2024-07-15 23:47:24.437651] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.489 [2024-07-15 23:47:24.437658] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.489 [2024-07-15 23:47:24.437665] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea03c0) on tqpair=0xe40540 00:20:49.489 [2024-07-15 23:47:24.437673] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:49.489 [2024-07-15 23:47:24.437690] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.489 [2024-07-15 23:47:24.437699] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.489 [2024-07-15 23:47:24.437706] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe40540) 00:20:49.489 [2024-07-15 23:47:24.437716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.489 [2024-07-15 23:47:24.437737] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea03c0, cid 0, qid 0 00:20:49.489 [2024-07-15 23:47:24.437821] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.489 [2024-07-15 23:47:24.437833] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.489 [2024-07-15 23:47:24.437840] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.489 [2024-07-15 23:47:24.437847] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea03c0) on tqpair=0xe40540 00:20:49.489 [2024-07-15 23:47:24.437854] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:49.489 [2024-07-15 23:47:24.437862] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:49.489 [2024-07-15 23:47:24.437875] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:49.489 [2024-07-15 23:47:24.437989] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:49.489 [2024-07-15 23:47:24.437999] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:49.489 [2024-07-15 23:47:24.438011] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.489 [2024-07-15 23:47:24.438019] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.489 [2024-07-15 23:47:24.438025] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe40540) 00:20:49.489 [2024-07-15 23:47:24.438035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.489 [2024-07-15 23:47:24.438057] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea03c0, cid 0, qid 0 00:20:49.489 [2024-07-15 23:47:24.438159] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.489 [2024-07-15 23:47:24.438171] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.489 [2024-07-15 23:47:24.438178] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.489 [2024-07-15 23:47:24.438185] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea03c0) on tqpair=0xe40540 00:20:49.489 [2024-07-15 23:47:24.438193] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:49.489 [2024-07-15 23:47:24.438209] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.489 [2024-07-15 23:47:24.438218] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.489 [2024-07-15 23:47:24.438224] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe40540) 00:20:49.489 [2024-07-15 23:47:24.438235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.489 [2024-07-15 23:47:24.438255] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea03c0, cid 0, qid 0 00:20:49.489 [2024-07-15 23:47:24.438344] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.489 [2024-07-15 23:47:24.438359] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.489 [2024-07-15 23:47:24.438366] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.489 [2024-07-15 23:47:24.438372] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea03c0) on tqpair=0xe40540 00:20:49.489 [2024-07-15 23:47:24.438380] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:49.489 [2024-07-15 23:47:24.438389] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:49.489 [2024-07-15 23:47:24.438402] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:49.489 [2024-07-15 23:47:24.438416] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:49.489 [2024-07-15 23:47:24.438430] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.489 [2024-07-15 23:47:24.438438] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe40540) 00:20:49.489 [2024-07-15 23:47:24.438449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.489 [2024-07-15 23:47:24.438470] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea03c0, cid 0, qid 0 00:20:49.489 [2024-07-15 23:47:24.438584] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:49.489 [2024-07-15 23:47:24.438596] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:49.489 [2024-07-15 23:47:24.438603] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:49.489 [2024-07-15 23:47:24.438613] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe40540): datao=0, datal=4096, cccid=0 00:20:49.489 [2024-07-15 23:47:24.438621] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xea03c0) on tqpair(0xe40540): expected_datao=0, 
payload_size=4096 00:20:49.489 [2024-07-15 23:47:24.438628] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.489 [2024-07-15 23:47:24.438646] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:49.489 [2024-07-15 23:47:24.438655] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:49.489 [2024-07-15 23:47:24.438666] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.489 [2024-07-15 23:47:24.438676] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.489 [2024-07-15 23:47:24.438682] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.489 [2024-07-15 23:47:24.438689] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea03c0) on tqpair=0xe40540 00:20:49.489 [2024-07-15 23:47:24.438700] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:49.489 [2024-07-15 23:47:24.438713] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:49.489 [2024-07-15 23:47:24.438721] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:49.489 [2024-07-15 23:47:24.438728] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:49.489 [2024-07-15 23:47:24.438736] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:49.489 [2024-07-15 23:47:24.438744] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:49.489 [2024-07-15 23:47:24.438758] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:49.489 [2024-07-15 23:47:24.438770] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.489 [2024-07-15 23:47:24.438777] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.489 [2024-07-15 23:47:24.438783] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe40540) 00:20:49.490 [2024-07-15 23:47:24.438794] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:49.490 [2024-07-15 23:47:24.438815] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea03c0, cid 0, qid 0 00:20:49.490 [2024-07-15 23:47:24.438922] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.490 [2024-07-15 23:47:24.438937] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.490 [2024-07-15 23:47:24.438943] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.490 [2024-07-15 23:47:24.438950] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea03c0) on tqpair=0xe40540 00:20:49.490 [2024-07-15 23:47:24.438971] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.490 [2024-07-15 23:47:24.438980] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.490 [2024-07-15 23:47:24.438986] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe40540) 00:20:49.490 [2024-07-15 23:47:24.438996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:49.490 
[2024-07-15 23:47:24.439006] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.490 [2024-07-15 23:47:24.439013] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.490 [2024-07-15 23:47:24.439020] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xe40540) 00:20:49.490 [2024-07-15 23:47:24.439028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:49.490 [2024-07-15 23:47:24.439038] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.490 [2024-07-15 23:47:24.439045] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.490 [2024-07-15 23:47:24.439055] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xe40540) 00:20:49.490 [2024-07-15 23:47:24.439064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:49.490 [2024-07-15 23:47:24.439073] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.490 [2024-07-15 23:47:24.439080] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.490 [2024-07-15 23:47:24.439086] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe40540) 00:20:49.490 [2024-07-15 23:47:24.439095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:49.490 [2024-07-15 23:47:24.439104] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:49.490 [2024-07-15 23:47:24.439123] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:49.490 [2024-07-15 23:47:24.439136] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.490 [2024-07-15 23:47:24.439143] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe40540) 00:20:49.490 [2024-07-15 23:47:24.439153] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.490 [2024-07-15 23:47:24.439185] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea03c0, cid 0, qid 0 00:20:49.490 [2024-07-15 23:47:24.439196] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea0540, cid 1, qid 0 00:20:49.490 [2024-07-15 23:47:24.439204] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea06c0, cid 2, qid 0 00:20:49.490 [2024-07-15 23:47:24.439212] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea0840, cid 3, qid 0 00:20:49.490 [2024-07-15 23:47:24.439220] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea09c0, cid 4, qid 0 00:20:49.490 [2024-07-15 23:47:24.439333] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.490 [2024-07-15 23:47:24.439345] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.490 [2024-07-15 23:47:24.439352] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.490 [2024-07-15 23:47:24.439359] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea09c0) on tqpair=0xe40540 00:20:49.490 [2024-07-15 23:47:24.439367] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:49.490 [2024-07-15 23:47:24.439375] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:49.490 [2024-07-15 23:47:24.439389] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:49.490 [2024-07-15 23:47:24.439400] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:49.490 [2024-07-15 23:47:24.439411] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.490 [2024-07-15 23:47:24.439418] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.490 [2024-07-15 23:47:24.439424] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe40540) 00:20:49.490 [2024-07-15 23:47:24.439435] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:49.490 [2024-07-15 23:47:24.439455] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea09c0, cid 4, qid 0 00:20:49.490 [2024-07-15 23:47:24.439561] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.490 [2024-07-15 23:47:24.439576] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.490 [2024-07-15 23:47:24.439583] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.490 [2024-07-15 23:47:24.439593] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea09c0) on tqpair=0xe40540 00:20:49.490 [2024-07-15 23:47:24.439664] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:49.490 [2024-07-15 23:47:24.439684] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:49.490 [2024-07-15 23:47:24.439699] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.490 [2024-07-15 23:47:24.439706] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe40540) 00:20:49.490 [2024-07-15 23:47:24.439717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.490 [2024-07-15 23:47:24.439738] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea09c0, cid 4, qid 0 00:20:49.490 [2024-07-15 23:47:24.439841] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:49.490 [2024-07-15 23:47:24.439856] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:49.490 [2024-07-15 23:47:24.439863] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:49.490 [2024-07-15 23:47:24.439869] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe40540): datao=0, datal=4096, cccid=4 00:20:49.490 [2024-07-15 23:47:24.439877] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xea09c0) on tqpair(0xe40540): expected_datao=0, payload_size=4096 00:20:49.490 [2024-07-15 23:47:24.439885] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.490 [2024-07-15 23:47:24.439902] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:49.490 [2024-07-15 23:47:24.439911] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:49.490 [2024-07-15 23:47:24.439969] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.490 [2024-07-15 23:47:24.439984] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.490 [2024-07-15 23:47:24.439991] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.490 [2024-07-15 23:47:24.439998] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea09c0) on tqpair=0xe40540 00:20:49.490 [2024-07-15 23:47:24.440012] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:49.490 [2024-07-15 23:47:24.440033] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:49.490 [2024-07-15 23:47:24.440051] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:49.490 [2024-07-15 23:47:24.440065] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.490 [2024-07-15 23:47:24.440073] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe40540) 00:20:49.490 [2024-07-15 23:47:24.440083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.490 [2024-07-15 23:47:24.440105] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea09c0, cid 4, qid 0 00:20:49.490 [2024-07-15 23:47:24.440218] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:49.490 [2024-07-15 23:47:24.440233] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:49.490 [2024-07-15 23:47:24.440240] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:49.490 [2024-07-15 23:47:24.440246] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe40540): datao=0, datal=4096, cccid=4 00:20:49.490 [2024-07-15 23:47:24.440254] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xea09c0) on tqpair(0xe40540): expected_datao=0, payload_size=4096 00:20:49.490 [2024-07-15 23:47:24.440261] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.490 [2024-07-15 23:47:24.440278] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:49.490 [2024-07-15 23:47:24.440288] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:49.490 [2024-07-15 23:47:24.440303] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.490 [2024-07-15 23:47:24.440313] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.490 [2024-07-15 23:47:24.440319] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.490 [2024-07-15 23:47:24.440326] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea09c0) on tqpair=0xe40540 00:20:49.490 [2024-07-15 23:47:24.440345] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:49.490 [2024-07-15 23:47:24.440363] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:49.490 [2024-07-15 23:47:24.440378] 
nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.490 [2024-07-15 23:47:24.440385] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe40540) 00:20:49.490 [2024-07-15 23:47:24.440396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.490 [2024-07-15 23:47:24.440417] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea09c0, cid 4, qid 0 00:20:49.490 [2024-07-15 23:47:24.440516] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:49.490 [2024-07-15 23:47:24.440531] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:49.490 [2024-07-15 23:47:24.440538] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:49.490 [2024-07-15 23:47:24.440544] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe40540): datao=0, datal=4096, cccid=4 00:20:49.490 [2024-07-15 23:47:24.440552] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xea09c0) on tqpair(0xe40540): expected_datao=0, payload_size=4096 00:20:49.490 [2024-07-15 23:47:24.440559] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.490 [2024-07-15 23:47:24.440576] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:49.490 [2024-07-15 23:47:24.440585] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:49.490 [2024-07-15 23:47:24.440614] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.490 [2024-07-15 23:47:24.440625] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.490 [2024-07-15 23:47:24.440632] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.490 [2024-07-15 23:47:24.440639] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea09c0) on tqpair=0xe40540 00:20:49.490 [2024-07-15 23:47:24.440651] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:49.490 [2024-07-15 23:47:24.440666] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:49.490 [2024-07-15 23:47:24.440680] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:49.491 [2024-07-15 23:47:24.440691] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:20:49.491 [2024-07-15 23:47:24.440700] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:49.491 [2024-07-15 23:47:24.440709] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:49.491 [2024-07-15 23:47:24.440717] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:49.491 [2024-07-15 23:47:24.440725] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:49.491 [2024-07-15 23:47:24.440734] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 
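(The trace above is the host-side controller initialization state machine completing: FABRIC CONNECT on the admin queue, VS/CAP property reads, CC.EN = 1, polling for CSTS.RDY = 1, IDENTIFY CONTROLLER, AER configuration, keep-alive setup, and queue-count negotiation, ending at "ready". As a minimal sketch, a host program can drive this same sequence through SPDK's public API; the sketch below assumes SPDK headers/libraries are available and reuses the transport ID string the test passes via -r. The program name "identify_sketch" is illustrative, not from the log.)

    #include <stdio.h>

    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        /* Initialize the SPDK environment (DPDK EAL underneath, as in the
         * "Starting SPDK ... initialization" line earlier in this log). */
        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch"; /* illustrative name, an assumption */
        if (spdk_env_init(&env_opts) < 0) {
            return 1;
        }

        /* Same transport ID string the test passes to spdk_nvme_identify -r. */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        /* spdk_nvme_connect() runs the whole state machine traced above:
         * connect adminq -> read vs/cap -> CC.EN = 1 -> wait CSTS.RDY = 1 ->
         * IDENTIFY -> configure AER -> set keep alive -> ready. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("Serial Number: %.20s\n", (const char *)cdata->sn);

        /* Detach triggers the controller shutdown (RTD3E / CSTS.SHST poll)
         * sequence that appears at the end of the trace. */
        spdk_nvme_detach(ctrlr);
        return 0;
    }

(Built against SPDK's installed headers and linked with its nvme and env libraries -- the exact link line depends on the build -- this should emit the same FABRIC CONNECT / PROPERTY GET / IDENTIFY admin commands seen in the -L all trace that continues below.)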
00:20:49.491 [2024-07-15 23:47:24.440752] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.491 [2024-07-15 23:47:24.440766] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe40540) 00:20:49.491 [2024-07-15 23:47:24.440777] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.491 [2024-07-15 23:47:24.440789] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.491 [2024-07-15 23:47:24.440796] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.491 [2024-07-15 23:47:24.440802] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe40540) 00:20:49.491 [2024-07-15 23:47:24.440811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:49.491 [2024-07-15 23:47:24.440836] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea09c0, cid 4, qid 0 00:20:49.491 [2024-07-15 23:47:24.440848] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea0b40, cid 5, qid 0 00:20:49.491 [2024-07-15 23:47:24.440950] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.491 [2024-07-15 23:47:24.444990] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.491 [2024-07-15 23:47:24.444998] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.491 [2024-07-15 23:47:24.445005] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea09c0) on tqpair=0xe40540 00:20:49.491 [2024-07-15 23:47:24.445015] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.491 [2024-07-15 23:47:24.445024] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.491 [2024-07-15 23:47:24.445030] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.491 [2024-07-15 23:47:24.445037] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea0b40) on tqpair=0xe40540 00:20:49.491 [2024-07-15 23:47:24.445068] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.491 [2024-07-15 23:47:24.445078] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe40540) 00:20:49.491 [2024-07-15 23:47:24.445089] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.491 [2024-07-15 23:47:24.445111] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea0b40, cid 5, qid 0 00:20:49.491 [2024-07-15 23:47:24.445256] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.491 [2024-07-15 23:47:24.445269] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.491 [2024-07-15 23:47:24.445276] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.491 [2024-07-15 23:47:24.445283] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea0b40) on tqpair=0xe40540 00:20:49.491 [2024-07-15 23:47:24.445298] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.491 [2024-07-15 23:47:24.445307] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe40540) 00:20:49.491 [2024-07-15 23:47:24.445318] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.491 [2024-07-15 23:47:24.445338] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea0b40, cid 5, qid 0 00:20:49.491 [2024-07-15 23:47:24.445424] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.491 [2024-07-15 23:47:24.445437] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.491 [2024-07-15 23:47:24.445443] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.491 [2024-07-15 23:47:24.445450] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea0b40) on tqpair=0xe40540 00:20:49.491 [2024-07-15 23:47:24.445465] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.491 [2024-07-15 23:47:24.445474] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe40540) 00:20:49.491 [2024-07-15 23:47:24.445484] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.491 [2024-07-15 23:47:24.445508] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea0b40, cid 5, qid 0 00:20:49.491 [2024-07-15 23:47:24.445594] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.491 [2024-07-15 23:47:24.445606] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.491 [2024-07-15 23:47:24.445613] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.491 [2024-07-15 23:47:24.445620] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea0b40) on tqpair=0xe40540 00:20:49.491 [2024-07-15 23:47:24.445643] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.491 [2024-07-15 23:47:24.445654] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe40540) 00:20:49.491 [2024-07-15 23:47:24.445665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.491 [2024-07-15 23:47:24.445677] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.491 [2024-07-15 23:47:24.445685] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe40540) 00:20:49.491 [2024-07-15 23:47:24.445694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.491 [2024-07-15 23:47:24.445706] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.491 [2024-07-15 23:47:24.445714] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xe40540) 00:20:49.491 [2024-07-15 23:47:24.445723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.491 [2024-07-15 23:47:24.445735] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.491 [2024-07-15 23:47:24.445742] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xe40540) 00:20:49.491 [2024-07-15 23:47:24.445752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.491 [2024-07-15 23:47:24.445773] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea0b40, cid 5, qid 0 00:20:49.491 [2024-07-15 23:47:24.445785] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea09c0, cid 4, qid 0 00:20:49.491 [2024-07-15 23:47:24.445792] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea0cc0, cid 6, qid 0 00:20:49.491 [2024-07-15 23:47:24.445800] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea0e40, cid 7, qid 0 00:20:49.491 [2024-07-15 23:47:24.446004] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:49.491 [2024-07-15 23:47:24.446020] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:49.491 [2024-07-15 23:47:24.446027] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:49.491 [2024-07-15 23:47:24.446034] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe40540): datao=0, datal=8192, cccid=5 00:20:49.491 [2024-07-15 23:47:24.446042] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xea0b40) on tqpair(0xe40540): expected_datao=0, payload_size=8192 00:20:49.491 [2024-07-15 23:47:24.446049] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.491 [2024-07-15 23:47:24.446060] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:49.491 [2024-07-15 23:47:24.446068] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:49.491 [2024-07-15 23:47:24.446077] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:49.491 [2024-07-15 23:47:24.446085] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:49.491 [2024-07-15 23:47:24.446092] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:49.491 [2024-07-15 23:47:24.446098] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe40540): datao=0, datal=512, cccid=4 00:20:49.491 [2024-07-15 23:47:24.446106] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xea09c0) on tqpair(0xe40540): expected_datao=0, payload_size=512 00:20:49.491 [2024-07-15 23:47:24.446117] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.491 [2024-07-15 23:47:24.446127] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:49.491 [2024-07-15 23:47:24.446135] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:49.491 [2024-07-15 23:47:24.446143] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:49.491 [2024-07-15 23:47:24.446152] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:49.491 [2024-07-15 23:47:24.446158] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:49.491 [2024-07-15 23:47:24.446164] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe40540): datao=0, datal=512, cccid=6 00:20:49.491 [2024-07-15 23:47:24.446172] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xea0cc0) on tqpair(0xe40540): expected_datao=0, payload_size=512 00:20:49.491 [2024-07-15 23:47:24.446179] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.491 [2024-07-15 23:47:24.446189] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:49.492 [2024-07-15 23:47:24.446196] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:49.492 [2024-07-15 23:47:24.446205] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:49.492 [2024-07-15 23:47:24.446213] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =7 00:20:49.492 [2024-07-15 23:47:24.446220] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:49.492 [2024-07-15 23:47:24.446226] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe40540): datao=0, datal=4096, cccid=7 00:20:49.492 [2024-07-15 23:47:24.446234] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xea0e40) on tqpair(0xe40540): expected_datao=0, payload_size=4096 00:20:49.492 [2024-07-15 23:47:24.446241] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.492 [2024-07-15 23:47:24.446262] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:49.492 [2024-07-15 23:47:24.446287] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:49.492 [2024-07-15 23:47:24.487069] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.492 [2024-07-15 23:47:24.487088] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.492 [2024-07-15 23:47:24.487095] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.492 [2024-07-15 23:47:24.487102] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea0b40) on tqpair=0xe40540 00:20:49.492 [2024-07-15 23:47:24.487128] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.492 [2024-07-15 23:47:24.487140] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.492 [2024-07-15 23:47:24.487146] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.492 [2024-07-15 23:47:24.487153] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea09c0) on tqpair=0xe40540 00:20:49.492 [2024-07-15 23:47:24.487168] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.492 [2024-07-15 23:47:24.487179] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.492 [2024-07-15 23:47:24.487186] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.492 [2024-07-15 23:47:24.487192] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea0cc0) on tqpair=0xe40540 00:20:49.492 [2024-07-15 23:47:24.487203] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.492 [2024-07-15 23:47:24.487212] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.492 [2024-07-15 23:47:24.487219] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.492 [2024-07-15 23:47:24.487225] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea0e40) on tqpair=0xe40540 00:20:49.492 ===================================================== 00:20:49.492 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:49.492 ===================================================== 00:20:49.492 Controller Capabilities/Features 00:20:49.492 ================================ 00:20:49.492 Vendor ID: 8086 00:20:49.492 Subsystem Vendor ID: 8086 00:20:49.492 Serial Number: SPDK00000000000001 00:20:49.492 Model Number: SPDK bdev Controller 00:20:49.492 Firmware Version: 24.09 00:20:49.492 Recommended Arb Burst: 6 00:20:49.492 IEEE OUI Identifier: e4 d2 5c 00:20:49.492 Multi-path I/O 00:20:49.492 May have multiple subsystem ports: Yes 00:20:49.492 May have multiple controllers: Yes 00:20:49.492 Associated with SR-IOV VF: No 00:20:49.492 Max Data Transfer Size: 131072 00:20:49.492 Max Number of Namespaces: 32 00:20:49.492 Max Number of I/O Queues: 127 00:20:49.492 NVMe Specification Version (VS): 1.3 
00:20:49.492 NVMe Specification Version (Identify): 1.3 00:20:49.492 Maximum Queue Entries: 128 00:20:49.492 Contiguous Queues Required: Yes 00:20:49.492 Arbitration Mechanisms Supported 00:20:49.492 Weighted Round Robin: Not Supported 00:20:49.492 Vendor Specific: Not Supported 00:20:49.492 Reset Timeout: 15000 ms 00:20:49.492 Doorbell Stride: 4 bytes 00:20:49.492 NVM Subsystem Reset: Not Supported 00:20:49.492 Command Sets Supported 00:20:49.492 NVM Command Set: Supported 00:20:49.492 Boot Partition: Not Supported 00:20:49.492 Memory Page Size Minimum: 4096 bytes 00:20:49.492 Memory Page Size Maximum: 4096 bytes 00:20:49.492 Persistent Memory Region: Not Supported 00:20:49.492 Optional Asynchronous Events Supported 00:20:49.492 Namespace Attribute Notices: Supported 00:20:49.492 Firmware Activation Notices: Not Supported 00:20:49.492 ANA Change Notices: Not Supported 00:20:49.492 PLE Aggregate Log Change Notices: Not Supported 00:20:49.492 LBA Status Info Alert Notices: Not Supported 00:20:49.492 EGE Aggregate Log Change Notices: Not Supported 00:20:49.492 Normal NVM Subsystem Shutdown event: Not Supported 00:20:49.492 Zone Descriptor Change Notices: Not Supported 00:20:49.492 Discovery Log Change Notices: Not Supported 00:20:49.492 Controller Attributes 00:20:49.492 128-bit Host Identifier: Supported 00:20:49.492 Non-Operational Permissive Mode: Not Supported 00:20:49.492 NVM Sets: Not Supported 00:20:49.492 Read Recovery Levels: Not Supported 00:20:49.492 Endurance Groups: Not Supported 00:20:49.492 Predictable Latency Mode: Not Supported 00:20:49.492 Traffic Based Keep ALive: Not Supported 00:20:49.492 Namespace Granularity: Not Supported 00:20:49.492 SQ Associations: Not Supported 00:20:49.492 UUID List: Not Supported 00:20:49.492 Multi-Domain Subsystem: Not Supported 00:20:49.492 Fixed Capacity Management: Not Supported 00:20:49.492 Variable Capacity Management: Not Supported 00:20:49.492 Delete Endurance Group: Not Supported 00:20:49.492 Delete NVM Set: Not Supported 00:20:49.492 Extended LBA Formats Supported: Not Supported 00:20:49.492 Flexible Data Placement Supported: Not Supported 00:20:49.492 00:20:49.492 Controller Memory Buffer Support 00:20:49.492 ================================ 00:20:49.492 Supported: No 00:20:49.492 00:20:49.492 Persistent Memory Region Support 00:20:49.492 ================================ 00:20:49.492 Supported: No 00:20:49.492 00:20:49.492 Admin Command Set Attributes 00:20:49.492 ============================ 00:20:49.492 Security Send/Receive: Not Supported 00:20:49.492 Format NVM: Not Supported 00:20:49.492 Firmware Activate/Download: Not Supported 00:20:49.492 Namespace Management: Not Supported 00:20:49.492 Device Self-Test: Not Supported 00:20:49.492 Directives: Not Supported 00:20:49.492 NVMe-MI: Not Supported 00:20:49.492 Virtualization Management: Not Supported 00:20:49.492 Doorbell Buffer Config: Not Supported 00:20:49.492 Get LBA Status Capability: Not Supported 00:20:49.492 Command & Feature Lockdown Capability: Not Supported 00:20:49.492 Abort Command Limit: 4 00:20:49.492 Async Event Request Limit: 4 00:20:49.492 Number of Firmware Slots: N/A 00:20:49.492 Firmware Slot 1 Read-Only: N/A 00:20:49.492 Firmware Activation Without Reset: N/A 00:20:49.492 Multiple Update Detection Support: N/A 00:20:49.492 Firmware Update Granularity: No Information Provided 00:20:49.492 Per-Namespace SMART Log: No 00:20:49.492 Asymmetric Namespace Access Log Page: Not Supported 00:20:49.492 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:49.492 Command Effects 
Log Page: Supported 00:20:49.492 Get Log Page Extended Data: Supported 00:20:49.492 Telemetry Log Pages: Not Supported 00:20:49.492 Persistent Event Log Pages: Not Supported 00:20:49.492 Supported Log Pages Log Page: May Support 00:20:49.492 Commands Supported & Effects Log Page: Not Supported 00:20:49.492 Feature Identifiers & Effects Log Page:May Support 00:20:49.492 NVMe-MI Commands & Effects Log Page: May Support 00:20:49.492 Data Area 4 for Telemetry Log: Not Supported 00:20:49.492 Error Log Page Entries Supported: 128 00:20:49.492 Keep Alive: Supported 00:20:49.492 Keep Alive Granularity: 10000 ms 00:20:49.492 00:20:49.492 NVM Command Set Attributes 00:20:49.492 ========================== 00:20:49.492 Submission Queue Entry Size 00:20:49.492 Max: 64 00:20:49.492 Min: 64 00:20:49.492 Completion Queue Entry Size 00:20:49.492 Max: 16 00:20:49.492 Min: 16 00:20:49.492 Number of Namespaces: 32 00:20:49.492 Compare Command: Supported 00:20:49.492 Write Uncorrectable Command: Not Supported 00:20:49.492 Dataset Management Command: Supported 00:20:49.492 Write Zeroes Command: Supported 00:20:49.492 Set Features Save Field: Not Supported 00:20:49.492 Reservations: Supported 00:20:49.492 Timestamp: Not Supported 00:20:49.492 Copy: Supported 00:20:49.492 Volatile Write Cache: Present 00:20:49.492 Atomic Write Unit (Normal): 1 00:20:49.492 Atomic Write Unit (PFail): 1 00:20:49.492 Atomic Compare & Write Unit: 1 00:20:49.492 Fused Compare & Write: Supported 00:20:49.492 Scatter-Gather List 00:20:49.492 SGL Command Set: Supported 00:20:49.492 SGL Keyed: Supported 00:20:49.492 SGL Bit Bucket Descriptor: Not Supported 00:20:49.492 SGL Metadata Pointer: Not Supported 00:20:49.492 Oversized SGL: Not Supported 00:20:49.492 SGL Metadata Address: Not Supported 00:20:49.492 SGL Offset: Supported 00:20:49.492 Transport SGL Data Block: Not Supported 00:20:49.492 Replay Protected Memory Block: Not Supported 00:20:49.492 00:20:49.492 Firmware Slot Information 00:20:49.492 ========================= 00:20:49.492 Active slot: 1 00:20:49.492 Slot 1 Firmware Revision: 24.09 00:20:49.492 00:20:49.492 00:20:49.492 Commands Supported and Effects 00:20:49.492 ============================== 00:20:49.492 Admin Commands 00:20:49.492 -------------- 00:20:49.492 Get Log Page (02h): Supported 00:20:49.492 Identify (06h): Supported 00:20:49.492 Abort (08h): Supported 00:20:49.492 Set Features (09h): Supported 00:20:49.492 Get Features (0Ah): Supported 00:20:49.492 Asynchronous Event Request (0Ch): Supported 00:20:49.492 Keep Alive (18h): Supported 00:20:49.492 I/O Commands 00:20:49.492 ------------ 00:20:49.492 Flush (00h): Supported LBA-Change 00:20:49.493 Write (01h): Supported LBA-Change 00:20:49.493 Read (02h): Supported 00:20:49.493 Compare (05h): Supported 00:20:49.493 Write Zeroes (08h): Supported LBA-Change 00:20:49.493 Dataset Management (09h): Supported LBA-Change 00:20:49.493 Copy (19h): Supported LBA-Change 00:20:49.493 00:20:49.493 Error Log 00:20:49.493 ========= 00:20:49.493 00:20:49.493 Arbitration 00:20:49.493 =========== 00:20:49.493 Arbitration Burst: 1 00:20:49.493 00:20:49.493 Power Management 00:20:49.493 ================ 00:20:49.493 Number of Power States: 1 00:20:49.493 Current Power State: Power State #0 00:20:49.493 Power State #0: 00:20:49.493 Max Power: 0.00 W 00:20:49.493 Non-Operational State: Operational 00:20:49.493 Entry Latency: Not Reported 00:20:49.493 Exit Latency: Not Reported 00:20:49.493 Relative Read Throughput: 0 00:20:49.493 Relative Read Latency: 0 00:20:49.493 Relative Write 
Throughput: 0 00:20:49.493 Relative Write Latency: 0 00:20:49.493 Idle Power: Not Reported 00:20:49.493 Active Power: Not Reported 00:20:49.493 Non-Operational Permissive Mode: Not Supported 00:20:49.493 00:20:49.493 Health Information 00:20:49.493 ================== 00:20:49.493 Critical Warnings: 00:20:49.493 Available Spare Space: OK 00:20:49.493 Temperature: OK 00:20:49.493 Device Reliability: OK 00:20:49.493 Read Only: No 00:20:49.493 Volatile Memory Backup: OK 00:20:49.493 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:49.493 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:49.493 Available Spare: 0% 00:20:49.493 Available Spare Threshold: 0% 00:20:49.493 Life Percentage Used:[2024-07-15 23:47:24.487354] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.493 [2024-07-15 23:47:24.487366] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xe40540) 00:20:49.493 [2024-07-15 23:47:24.487377] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.493 [2024-07-15 23:47:24.487400] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea0e40, cid 7, qid 0 00:20:49.493 [2024-07-15 23:47:24.487507] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.493 [2024-07-15 23:47:24.487520] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.493 [2024-07-15 23:47:24.487527] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.493 [2024-07-15 23:47:24.487534] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea0e40) on tqpair=0xe40540 00:20:49.493 [2024-07-15 23:47:24.487577] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:49.493 [2024-07-15 23:47:24.487597] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea03c0) on tqpair=0xe40540 00:20:49.493 [2024-07-15 23:47:24.487607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:49.493 [2024-07-15 23:47:24.487617] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea0540) on tqpair=0xe40540 00:20:49.493 [2024-07-15 23:47:24.487624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:49.493 [2024-07-15 23:47:24.487632] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea06c0) on tqpair=0xe40540 00:20:49.493 [2024-07-15 23:47:24.487640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:49.493 [2024-07-15 23:47:24.487648] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea0840) on tqpair=0xe40540 00:20:49.493 [2024-07-15 23:47:24.487656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:49.493 [2024-07-15 23:47:24.487668] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.493 [2024-07-15 23:47:24.487676] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.493 [2024-07-15 23:47:24.487683] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe40540) 00:20:49.493 [2024-07-15 23:47:24.487694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY 
GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.493 [2024-07-15 23:47:24.487715] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea0840, cid 3, qid 0 00:20:49.493 [2024-07-15 23:47:24.487806] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.493 [2024-07-15 23:47:24.487821] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.493 [2024-07-15 23:47:24.487828] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.493 [2024-07-15 23:47:24.487835] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea0840) on tqpair=0xe40540 00:20:49.493 [2024-07-15 23:47:24.487846] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.493 [2024-07-15 23:47:24.487854] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.493 [2024-07-15 23:47:24.487861] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe40540) 00:20:49.493 [2024-07-15 23:47:24.487871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.493 [2024-07-15 23:47:24.487897] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea0840, cid 3, qid 0 00:20:49.493 [2024-07-15 23:47:24.488007] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.493 [2024-07-15 23:47:24.488021] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.493 [2024-07-15 23:47:24.488028] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.493 [2024-07-15 23:47:24.488035] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea0840) on tqpair=0xe40540 00:20:49.493 [2024-07-15 23:47:24.488042] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:49.493 [2024-07-15 23:47:24.488050] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:49.493 [2024-07-15 23:47:24.488066] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.493 [2024-07-15 23:47:24.488079] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.493 [2024-07-15 23:47:24.488086] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe40540) 00:20:49.493 [2024-07-15 23:47:24.488096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.493 [2024-07-15 23:47:24.488117] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea0840, cid 3, qid 0 00:20:49.493 [2024-07-15 23:47:24.488206] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.493 [2024-07-15 23:47:24.488221] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.493 [2024-07-15 23:47:24.488228] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.493 [2024-07-15 23:47:24.488235] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea0840) on tqpair=0xe40540 00:20:49.493 [2024-07-15 23:47:24.488251] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.493 [2024-07-15 23:47:24.488261] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.493 [2024-07-15 23:47:24.488267] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe40540) 00:20:49.493 [2024-07-15 
23:47:24.488277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.493 [2024-07-15 23:47:24.488298] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea0840, cid 3, qid 0 00:20:49.493 [2024-07-15 23:47:24.488406] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.493 [2024-07-15 23:47:24.488421] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.493 [2024-07-15 23:47:24.488428] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.493 [2024-07-15 23:47:24.488435] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea0840) on tqpair=0xe40540 00:20:49.493 [2024-07-15 23:47:24.488451] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.493 [2024-07-15 23:47:24.488460] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.493 [2024-07-15 23:47:24.488467] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe40540) 00:20:49.493 [2024-07-15 23:47:24.488477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.493 [2024-07-15 23:47:24.488498] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea0840, cid 3, qid 0 00:20:49.493 [2024-07-15 23:47:24.488581] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.493 [2024-07-15 23:47:24.488595] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.493 [2024-07-15 23:47:24.488602] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.493 [2024-07-15 23:47:24.488609] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea0840) on tqpair=0xe40540 00:20:49.493 [2024-07-15 23:47:24.488626] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.493 [2024-07-15 23:47:24.488635] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.493 [2024-07-15 23:47:24.488642] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe40540) 00:20:49.493 [2024-07-15 23:47:24.488652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.493 [2024-07-15 23:47:24.488672] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea0840, cid 3, qid 0 00:20:49.493 [2024-07-15 23:47:24.488758] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.493 [2024-07-15 23:47:24.488770] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.493 [2024-07-15 23:47:24.488776] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.493 [2024-07-15 23:47:24.488783] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea0840) on tqpair=0xe40540 00:20:49.493 [2024-07-15 23:47:24.488799] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.493 [2024-07-15 23:47:24.488808] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.493 [2024-07-15 23:47:24.488814] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe40540) 00:20:49.493 [2024-07-15 23:47:24.488828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.493 [2024-07-15 23:47:24.488850] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea0840, cid 3, qid 0 00:20:49.493 [2024-07-15 23:47:24.488935] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.493 [2024-07-15 23:47:24.488947] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.493 [2024-07-15 23:47:24.492961] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.493 [2024-07-15 23:47:24.492976] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea0840) on tqpair=0xe40540 00:20:49.493 [2024-07-15 23:47:24.492995] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.493 [2024-07-15 23:47:24.493019] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.493 [2024-07-15 23:47:24.493026] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe40540) 00:20:49.493 [2024-07-15 23:47:24.493037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.493 [2024-07-15 23:47:24.493059] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xea0840, cid 3, qid 0 00:20:49.493 [2024-07-15 23:47:24.493204] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.493 [2024-07-15 23:47:24.493216] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.493 [2024-07-15 23:47:24.493223] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.493 [2024-07-15 23:47:24.493230] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xea0840) on tqpair=0xe40540 00:20:49.493 [2024-07-15 23:47:24.493243] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:20:49.493 0% 00:20:49.494 Data Units Read: 0 00:20:49.494 Data Units Written: 0 00:20:49.494 Host Read Commands: 0 00:20:49.494 Host Write Commands: 0 00:20:49.494 Controller Busy Time: 0 minutes 00:20:49.494 Power Cycles: 0 00:20:49.494 Power On Hours: 0 hours 00:20:49.494 Unsafe Shutdowns: 0 00:20:49.494 Unrecoverable Media Errors: 0 00:20:49.494 Lifetime Error Log Entries: 0 00:20:49.494 Warning Temperature Time: 0 minutes 00:20:49.494 Critical Temperature Time: 0 minutes 00:20:49.494 00:20:49.494 Number of Queues 00:20:49.494 ================ 00:20:49.494 Number of I/O Submission Queues: 127 00:20:49.494 Number of I/O Completion Queues: 127 00:20:49.494 00:20:49.494 Active Namespaces 00:20:49.494 ================= 00:20:49.494 Namespace ID:1 00:20:49.494 Error Recovery Timeout: Unlimited 00:20:49.494 Command Set Identifier: NVM (00h) 00:20:49.494 Deallocate: Supported 00:20:49.494 Deallocated/Unwritten Error: Not Supported 00:20:49.494 Deallocated Read Value: Unknown 00:20:49.494 Deallocate in Write Zeroes: Not Supported 00:20:49.494 Deallocated Guard Field: 0xFFFF 00:20:49.494 Flush: Supported 00:20:49.494 Reservation: Supported 00:20:49.494 Namespace Sharing Capabilities: Multiple Controllers 00:20:49.494 Size (in LBAs): 131072 (0GiB) 00:20:49.494 Capacity (in LBAs): 131072 (0GiB) 00:20:49.494 Utilization (in LBAs): 131072 (0GiB) 00:20:49.494 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:49.494 EUI64: ABCDEF0123456789 00:20:49.494 UUID: 6634b74d-84fb-485d-8500-251d7a806324 00:20:49.494 Thin Provisioning: Not Supported 00:20:49.494 Per-NS Atomic Units: Yes 00:20:49.494 Atomic Boundary Size (Normal): 0 00:20:49.494 Atomic Boundary Size (PFail): 0 00:20:49.494 Atomic Boundary Offset: 0 00:20:49.494 Maximum 
Single Source Range Length: 65535 00:20:49.494 Maximum Copy Length: 65535 00:20:49.494 Maximum Source Range Count: 1 00:20:49.494 NGUID/EUI64 Never Reused: No 00:20:49.494 Namespace Write Protected: No 00:20:49.494 Number of LBA Formats: 1 00:20:49.494 Current LBA Format: LBA Format #00 00:20:49.494 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:49.494 00:20:49.494 23:47:24 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:20:49.494 23:47:24 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:49.494 23:47:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.494 23:47:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:49.494 23:47:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.494 23:47:24 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:49.494 23:47:24 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:20:49.494 23:47:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:49.494 23:47:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:20:49.494 23:47:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:49.494 23:47:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:20:49.494 23:47:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:49.494 23:47:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:49.494 rmmod nvme_tcp 00:20:49.494 rmmod nvme_fabrics 00:20:49.494 rmmod nvme_keyring 00:20:49.494 23:47:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:49.494 23:47:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:20:49.494 23:47:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:20:49.494 23:47:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3837083 ']' 00:20:49.494 23:47:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3837083 00:20:49.494 23:47:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 3837083 ']' 00:20:49.494 23:47:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 3837083 00:20:49.494 23:47:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:20:49.494 23:47:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:49.494 23:47:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3837083 00:20:49.494 23:47:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:49.494 23:47:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:49.494 23:47:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3837083' 00:20:49.494 killing process with pid 3837083 00:20:49.494 23:47:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 3837083 00:20:49.494 23:47:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 3837083 00:20:49.754 23:47:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:49.754 23:47:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:49.754 23:47:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:49.754 23:47:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]]
00:20:49.754 23:47:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns
00:20:49.754 23:47:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:49.754 23:47:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:49.754 23:47:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:52.315 23:47:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:20:52.315
00:20:52.315 real 0m5.532s
00:20:52.315 user 0m4.497s
00:20:52.315 sys 0m1.908s
00:20:52.315 23:47:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:20:52.315 23:47:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:20:52.315 ************************************
00:20:52.315 END TEST nvmf_identify
00:20:52.315 ************************************
00:20:52.315 23:47:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:20:52.315 23:47:26 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:20:52.315 23:47:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:20:52.315 23:47:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:20:52.315 23:47:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:20:52.315 ************************************
00:20:52.315 START TEST nvmf_perf
00:20:52.315 ************************************
00:20:52.315 23:47:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:20:52.315 * Looking for test storage...
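Judging by the banners and the real/user/sys block above, run_test is the suite's timing wrapper: it runs the named script with xtrace enabled, records elapsed time, and brackets the output with the START/END markers seen here. A minimal sketch of invoking the same test directly, assuming the workspace path shown in the trace:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  test/nvmf/host/perf.sh --transport=tcp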
00:20:52.315 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:52.315 23:47:27 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:52.315 23:47:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:20:52.315 23:47:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:52.315 23:47:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:52.315 23:47:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:52.315 23:47:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:52.315 23:47:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:52.315 23:47:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:52.315 23:47:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:52.315 23:47:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:52.315 23:47:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:52.315 23:47:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:52.315 23:47:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:52.315 23:47:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:52.315 23:47:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:52.315 23:47:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:52.315 23:47:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:52.315 23:47:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:52.315 23:47:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:52.315 23:47:27 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:52.315 23:47:27 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:52.315 23:47:27 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:52.315 23:47:27 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.315 23:47:27 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.316 23:47:27 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.316 23:47:27 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:20:52.316 23:47:27 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.316 23:47:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:20:52.316 23:47:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:52.316 23:47:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:52.316 23:47:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:52.316 23:47:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:52.316 23:47:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:52.316 23:47:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:52.316 23:47:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:52.316 23:47:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:52.316 23:47:27 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:52.316 23:47:27 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:52.316 23:47:27 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:52.316 23:47:27 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:20:52.316 23:47:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:52.316 23:47:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:52.316 23:47:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:52.316 23:47:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:52.316 23:47:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:52.316 23:47:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:52.316 23:47:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:52.316 23:47:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:52.316 23:47:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:52.316 23:47:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:52.316 23:47:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:20:52.316 23:47:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:54.217 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:54.217 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:54.217 Found net devices under 0000:09:00.0: cvl_0_0 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:54.217 Found net devices under 0000:09:00.1: cvl_0_1 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:54.217 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:54.218 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:54.218 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:54.218 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:54.218 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:54.218 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:54.218 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:54.218 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:54.218 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:54.218 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:54.218 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:54.218 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:54.218 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:54.218 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:54.218 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:54.218 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:54.218 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:54.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:54.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:20:54.218 00:20:54.218 --- 10.0.0.2 ping statistics --- 00:20:54.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.218 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:20:54.218 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:54.218 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:54.218 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:20:54.218 00:20:54.218 --- 10.0.0.1 ping statistics --- 00:20:54.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.218 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:20:54.218 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:54.218 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:20:54.218 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:54.218 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:54.218 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:54.218 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:54.218 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:54.218 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:54.218 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:54.218 23:47:29 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:54.218 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:54.218 23:47:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:54.218 23:47:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:54.218 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3839156 00:20:54.218 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:54.218 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 3839156 00:20:54.218 23:47:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 3839156 ']' 00:20:54.218 23:47:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:54.218 23:47:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:54.218 23:47:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:54.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:54.218 23:47:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:54.218 23:47:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:54.475 [2024-07-15 23:47:29.344805] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:20:54.475 [2024-07-15 23:47:29.344905] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:54.475 EAL: No free 2048 kB hugepages reported on node 1 00:20:54.475 [2024-07-15 23:47:29.411908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:54.475 [2024-07-15 23:47:29.527601] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:54.475 [2024-07-15 23:47:29.527657] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
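The nvmf_tgt instance starting here runs inside the cvl_0_0_ns_spdk network namespace that nvmftestinit set up above: the e810 port cvl_0_0 was moved into the namespace and given 10.0.0.2/24, its peer cvl_0_1 stayed in the root namespace as the initiator side with 10.0.0.1/24, TCP port 4420 was opened in iptables, and both directions were verified with a single ping before the app launched. A condensed sketch of that plumbing, using only commands and names taken from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

Once the target's RPC socket is up, perf.sh provisions it entirely over rpc.py before any I/O runs; the calls appear verbatim in the trace that follows (Malloc0 is the 64 MB, 512 B-block malloc bdev, Nvme0n1 the local NVMe at 0000:0b:00.0 attached via gen_nvme.sh):

  scripts/rpc.py bdev_malloc_create 64 512
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420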
00:20:54.475 [2024-07-15 23:47:29.527686] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:54.475 [2024-07-15 23:47:29.527697] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:54.475 [2024-07-15 23:47:29.527707] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:54.475 [2024-07-15 23:47:29.527787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:54.475 [2024-07-15 23:47:29.527858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:54.475 [2024-07-15 23:47:29.527910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.475 [2024-07-15 23:47:29.527907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:54.732 23:47:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:54.732 23:47:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:20:54.732 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:54.732 23:47:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:54.732 23:47:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:54.732 23:47:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:54.732 23:47:29 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:20:54.732 23:47:29 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:20:58.005 23:47:32 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:20:58.005 23:47:32 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:58.005 23:47:33 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:0b:00.0 00:20:58.005 23:47:33 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:58.263 23:47:33 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:58.263 23:47:33 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:0b:00.0 ']' 00:20:58.263 23:47:33 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:58.263 23:47:33 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:58.263 23:47:33 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:58.521 [2024-07-15 23:47:33.568223] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:58.521 23:47:33 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:58.778 23:47:33 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:58.778 23:47:33 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:59.036 23:47:34 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:59.036 23:47:34 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:20:59.294 23:47:34 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:59.552 [2024-07-15 23:47:34.551761] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:59.552 23:47:34 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:20:59.809 23:47:34 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:0b:00.0 ']'
00:20:59.809 23:47:34 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0'
00:20:59.809 23:47:34 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:20:59.809 23:47:34 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0'
00:21:01.182 Initializing NVMe Controllers
00:21:01.182 Attached to NVMe Controller at 0000:0b:00.0 [8086:0a54]
00:21:01.182 Associating PCIE (0000:0b:00.0) NSID 1 with lcore 0
00:21:01.182 Initialization complete. Launching workers.
00:21:01.182 ========================================================
00:21:01.182 Latency(us)
00:21:01.182 Device Information : IOPS MiB/s Average min max
00:21:01.182 PCIE (0000:0b:00.0) NSID 1 from core 0: 86042.86 336.10 371.44 42.28 5460.62
00:21:01.183 ========================================================
00:21:01.183 Total : 86042.86 336.10 371.44 42.28 5460.62
00:21:01.183
00:21:01.183 23:47:36 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:21:01.183 EAL: No free 2048 kB hugepages reported on node 1
00:21:02.555 Initializing NVMe Controllers
00:21:02.555 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:02.555 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:02.555 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:21:02.555 Initialization complete. Launching workers.
00:21:02.555 ========================================================
00:21:02.555 Latency(us)
00:21:02.555 Device Information : IOPS MiB/s Average min max
00:21:02.555 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 83.79 0.33 12124.78 166.41 45975.19
00:21:02.555 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 51.87 0.20 19585.41 6976.80 54870.36
00:21:02.555 ========================================================
00:21:02.555 Total : 135.66 0.53 14977.37 166.41 54870.36
00:21:02.555
00:21:02.555 23:47:37 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:21:02.555 EAL: No free 2048 kB hugepages reported on node 1
00:21:03.929 Initializing NVMe Controllers
00:21:03.929 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:03.929 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:03.929 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:21:03.929 Initialization complete. Launching workers.
00:21:03.929 ========================================================
00:21:03.929 Latency(us)
00:21:03.929 Device Information : IOPS MiB/s Average min max
00:21:03.929 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8540.52 33.36 3747.42 684.57 11086.05
00:21:03.929 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3863.88 15.09 8339.54 6812.45 17859.08
00:21:03.929 ========================================================
00:21:03.929 Total : 12404.40 48.45 5177.83 684.57 17859.08
00:21:03.929
00:21:03.929 23:47:38 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:21:03.929 23:47:38 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:21:03.929 23:47:38 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:21:03.929 EAL: No free 2048 kB hugepages reported on node 1
00:21:06.459 Initializing NVMe Controllers
00:21:06.459 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:06.459 Controller IO queue size 128, less than required.
00:21:06.459 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:06.459 Controller IO queue size 128, less than required.
00:21:06.459 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:06.459 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:06.459 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:21:06.459 Initialization complete. Launching workers.
00:21:06.459 ========================================================
00:21:06.459 Latency(us)
00:21:06.459 Device Information : IOPS MiB/s Average min max
00:21:06.459 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1623.15 405.79 80555.47 44722.47 123669.60
00:21:06.459 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 569.88 142.47 233316.67 125414.75 391221.44
00:21:06.459 ========================================================
00:21:06.459 Total : 2193.02 548.26 120251.79 44722.47 391221.44
00:21:06.459
00:21:06.459 23:47:41 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:21:06.459 EAL: No free 2048 kB hugepages reported on node 1
00:21:06.716 No valid NVMe controllers or AIO or URING devices found
00:21:06.716 Initializing NVMe Controllers
00:21:06.716 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:06.716 Controller IO queue size 128, less than required.
00:21:06.716 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:06.716 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:21:06.716 Controller IO queue size 128, less than required.
00:21:06.716 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:06.716 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:21:06.716 WARNING: Some requested NVMe devices were skipped
00:21:06.716 23:47:41 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:21:06.716 EAL: No free 2048 kB hugepages reported on node 1
00:21:09.244 Initializing NVMe Controllers
00:21:09.244 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:09.244 Controller IO queue size 128, less than required.
00:21:09.244 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:09.244 Controller IO queue size 128, less than required.
00:21:09.244 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:09.244 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:09.244 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:21:09.244 Initialization complete. Launching workers.
00:21:09.244 00:21:09.244 ==================== 00:21:09.244 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:09.244 TCP transport: 00:21:09.244 polls: 8758 00:21:09.244 idle_polls: 5259 00:21:09.244 sock_completions: 3499 00:21:09.244 nvme_completions: 5479 00:21:09.244 submitted_requests: 8170 00:21:09.244 queued_requests: 1 00:21:09.244 00:21:09.244 ==================== 00:21:09.244 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:09.244 TCP transport: 00:21:09.244 polls: 11948 00:21:09.244 idle_polls: 8199 00:21:09.244 sock_completions: 3749 00:21:09.244 nvme_completions: 6483 00:21:09.244 submitted_requests: 9716 00:21:09.244 queued_requests: 1 00:21:09.244 ======================================================== 00:21:09.244 Latency(us) 00:21:09.244 Device Information : IOPS MiB/s Average min max 00:21:09.244 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1369.50 342.37 95388.53 47959.47 145732.09 00:21:09.244 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1620.50 405.12 79505.67 41045.09 110698.95 00:21:09.244 ======================================================== 00:21:09.244 Total : 2990.00 747.50 86780.45 41045.09 145732.09 00:21:09.244 00:21:09.244 23:47:44 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:21:09.244 23:47:44 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:09.502 23:47:44 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:21:09.502 23:47:44 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:09.502 23:47:44 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:21:09.502 23:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:09.502 23:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:21:09.502 23:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:09.502 23:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:21:09.502 23:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:09.502 23:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:09.502 rmmod nvme_tcp 00:21:09.502 rmmod nvme_fabrics 00:21:09.502 rmmod nvme_keyring 00:21:09.759 23:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:09.759 23:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:21:09.759 23:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:21:09.759 23:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3839156 ']' 00:21:09.759 23:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3839156 00:21:09.759 23:47:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 3839156 ']' 00:21:09.759 23:47:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 3839156 00:21:09.759 23:47:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:21:09.759 23:47:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:09.759 23:47:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3839156 00:21:09.759 23:47:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:09.759 23:47:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:09.759 23:47:44 nvmf_tcp.nvmf_perf -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 3839156' 00:21:09.759 killing process with pid 3839156 00:21:09.759 23:47:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 3839156 00:21:09.759 23:47:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 3839156 00:21:11.132 23:47:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:11.132 23:47:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:11.132 23:47:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:11.132 23:47:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:11.132 23:47:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:11.132 23:47:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.132 23:47:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:11.132 23:47:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.665 23:47:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:13.665 00:21:13.665 real 0m21.310s 00:21:13.665 user 1m5.618s 00:21:13.665 sys 0m5.308s 00:21:13.665 23:47:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:13.665 23:47:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:13.665 ************************************ 00:21:13.665 END TEST nvmf_perf 00:21:13.665 ************************************ 00:21:13.665 23:47:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:13.665 23:47:48 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:13.665 23:47:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:13.665 23:47:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:13.665 23:47:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:13.665 ************************************ 00:21:13.665 START TEST nvmf_fio_host 00:21:13.665 ************************************ 00:21:13.665 23:47:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:13.665 * Looking for test storage... 
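The nvmf_perf pass that just ended swept spdk_nvme_perf over one local PCIe baseline and five fabric invocations, varying queue depth (-q 1/32/128), I/O size (-o 4096/262144/36964), run time (-t), core mask (-c 0xf), and extra options such as -HI (which appears to enable the tool's TCP header and data digests) and --transport-stat for the poll counters printed above; the 36964-byte run skipped both namespaces because that size is not a multiple of the 512 B sector size. The initiator-side command shape, taken from the trace:

  ./build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

Here -w randrw with -M 50 selects a 50/50 random read/write mix, and -r carries the same transport ID string throughout. The nvmf_fio_host test starting below repeats the same nvmftestinit bring-up (common.sh sourcing, NIC discovery, namespace plumbing) before driving its fio workload.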
00:21:13.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:13.665 23:47:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:13.665 23:47:48 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:21:13.666 23:47:48 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:15.593 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:15.593 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:21:15.593 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:15.593 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:15.593 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:15.593 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:15.593 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:15.593 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:21:15.593 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:15.593 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:21:15.593 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:21:15.593 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:21:15.593 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:21:15.593 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:21:15.593 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:21:15.593 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:15.593 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:15.593 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:15.593 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:15.593 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:15.593 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:15.593 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:15.593 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:15.593 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:15.593 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:15.593 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:15.593 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:15.593 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:15.593 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:15.593 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:15.593 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:15.593 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:15.593 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:15.593 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:15.593 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:15.593 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
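The trace above is gather_supported_nvmf_pci_devs classifying NICs by PCI vendor:device ID (Intel 0x8086:0x159b is an E810 port) and then checking which kernel driver each one is bound to. A minimal standalone sketch of that classification, assuming lspci and sysfs are available (this is not the SPDK helper itself):

    # Sketch only: enumerate Intel E810 ports (0x8086:0x159b) the way the
    # trace above does, then read the driver each device is bound to.
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        driver=$(basename "$(readlink -f /sys/bus/pci/devices/$pci/driver)")
        echo "Found $pci bound to $driver"    # the log shows "ice" here
    done

On this machine both 0000:09:00.0 and 0000:09:00.1 match and are bound to the ice driver, which is why the trace prints two "Found" lines.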
00:21:15.593 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:15.593 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:15.593 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:15.593 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:15.593 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:15.593 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:15.593 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:15.593 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:15.594 Found net devices under 0000:09:00.0: cvl_0_0 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:15.594 Found net devices under 0000:09:00.1: cvl_0_1 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
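With both ports discovered and is_hw=yes, nvmf_tcp_init below moves one port into a private network namespace so a single host can act as both NVMe/TCP target and initiator. A condensed sketch of the topology the next trace entries build, using the interface names and addresses from this log (they will differ on other hardware):

    # Sketch of the namespace topology nvmf_tcp_init sets up below.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                                             # verify the path

Running target and initiator on one box this way exercises the real kernel TCP stack over the physical NICs without needing a second machine, which is what the back-to-back ping checks confirm.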
00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:15.594 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:15.594 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:21:15.594 00:21:15.594 --- 10.0.0.2 ping statistics --- 00:21:15.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.594 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:15.594 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:15.594 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:21:15.594 00:21:15.594 --- 10.0.0.1 ping statistics --- 00:21:15.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.594 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3843127 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3843127 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 3843127 ']' 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:15.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:15.594 23:47:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.852 [2024-07-15 23:47:50.738271] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:21:15.852 [2024-07-15 23:47:50.738340] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:15.852 EAL: No free 2048 kB hugepages reported on node 1 00:21:15.852 [2024-07-15 23:47:50.804020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:15.852 [2024-07-15 23:47:50.909989] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:15.852 [2024-07-15 23:47:50.910059] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:15.852 [2024-07-15 23:47:50.910072] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:15.852 [2024-07-15 23:47:50.910098] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:15.852 [2024-07-15 23:47:50.910108] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:15.852 [2024-07-15 23:47:50.910164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:15.852 [2024-07-15 23:47:50.910224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:15.852 [2024-07-15 23:47:50.910289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:15.852 [2024-07-15 23:47:50.910292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.110 23:47:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:16.110 23:47:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:21:16.110 23:47:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:16.367 [2024-07-15 23:47:51.256254] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:16.367 23:47:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:16.367 23:47:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:16.367 23:47:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.367 23:47:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:16.624 Malloc1 00:21:16.624 23:47:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:16.881 23:47:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:17.139 23:47:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:17.396 [2024-07-15 23:47:52.296109] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:17.396 23:47:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:17.653 23:47:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:21:17.653 23:47:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:17.653 23:47:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:21:17.653 23:47:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:17.653 23:47:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:17.653 23:47:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:17.653 23:47:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:17.653 23:47:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:17.653 23:47:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:17.653 23:47:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:17.653 23:47:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:17.653 23:47:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:17.653 23:47:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:17.653 23:47:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:17.653 23:47:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:17.653 23:47:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:17.653 23:47:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:17.653 23:47:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:17.653 23:47:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:17.653 23:47:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:17.653 23:47:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:17.653 23:47:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:17.653 23:47:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:17.911 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:17.911 fio-3.35 00:21:17.911 Starting 1 thread 00:21:17.911 EAL: No free 2048 kB hugepages reported on node 1 00:21:20.435 00:21:20.435 test: (groupid=0, jobs=1): err= 0: pid=3843482: Mon Jul 15 23:47:55 2024 00:21:20.435 read: IOPS=8973, BW=35.1MiB/s (36.8MB/s)(70.4MiB/2007msec) 00:21:20.435 slat (nsec): min=1963, max=278479, avg=2611.79, stdev=2761.46 00:21:20.435 clat (usec): min=2506, max=13568, avg=7787.69, stdev=637.32 00:21:20.435 lat (usec): min=2540, max=13570, avg=7790.30, stdev=637.19 00:21:20.435 clat percentiles (usec): 00:21:20.435 | 1.00th=[ 6390], 5.00th=[ 6783], 10.00th=[ 6980], 20.00th=[ 7308], 00:21:20.435 | 30.00th=[ 7504], 40.00th=[ 7635], 50.00th=[ 7832], 60.00th=[ 7963], 00:21:20.435 | 70.00th=[ 8094], 80.00th=[ 8291], 90.00th=[ 8586], 95.00th=[ 8717], 00:21:20.435 | 99.00th=[ 9110], 99.50th=[ 9372], 99.90th=[11863], 99.95th=[12911], 00:21:20.435 | 99.99th=[13566] 00:21:20.435 bw ( KiB/s): 
min=35016, max=36768, per=99.98%, avg=35886.00, stdev=729.58, samples=4 00:21:20.435 iops : min= 8754, max= 9192, avg=8971.50, stdev=182.39, samples=4 00:21:20.435 write: IOPS=8998, BW=35.1MiB/s (36.9MB/s)(70.5MiB/2007msec); 0 zone resets 00:21:20.435 slat (usec): min=2, max=152, avg= 2.70, stdev= 1.64 00:21:20.435 clat (usec): min=1771, max=12758, avg=6394.36, stdev=530.72 00:21:20.435 lat (usec): min=1781, max=12760, avg=6397.06, stdev=530.66 00:21:20.435 clat percentiles (usec): 00:21:20.435 | 1.00th=[ 5211], 5.00th=[ 5604], 10.00th=[ 5800], 20.00th=[ 5997], 00:21:20.435 | 30.00th=[ 6128], 40.00th=[ 6259], 50.00th=[ 6390], 60.00th=[ 6521], 00:21:20.435 | 70.00th=[ 6652], 80.00th=[ 6783], 90.00th=[ 6980], 95.00th=[ 7177], 00:21:20.435 | 99.00th=[ 7504], 99.50th=[ 7635], 99.90th=[10159], 99.95th=[11600], 00:21:20.435 | 99.99th=[12780] 00:21:20.435 bw ( KiB/s): min=35848, max=36224, per=100.00%, avg=35998.00, stdev=166.07, samples=4 00:21:20.435 iops : min= 8962, max= 9056, avg=8999.50, stdev=41.52, samples=4 00:21:20.435 lat (msec) : 2=0.02%, 4=0.12%, 10=99.70%, 20=0.16% 00:21:20.435 cpu : usr=63.71%, sys=33.45%, ctx=86, majf=0, minf=41 00:21:20.436 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:20.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.436 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:20.436 issued rwts: total=18010,18059,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.436 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:20.436 00:21:20.436 Run status group 0 (all jobs): 00:21:20.436 READ: bw=35.1MiB/s (36.8MB/s), 35.1MiB/s-35.1MiB/s (36.8MB/s-36.8MB/s), io=70.4MiB (73.8MB), run=2007-2007msec 00:21:20.436 WRITE: bw=35.1MiB/s (36.9MB/s), 35.1MiB/s-35.1MiB/s (36.9MB/s-36.9MB/s), io=70.5MiB (74.0MB), run=2007-2007msec 00:21:20.436 23:47:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:20.436 23:47:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:20.436 23:47:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:20.436 23:47:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:20.436 23:47:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:20.436 23:47:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:20.436 23:47:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:20.436 23:47:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:20.436 23:47:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:20.436 23:47:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:20.436 23:47:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:20.436 23:47:55 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:20.436 23:47:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:20.436 23:47:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:20.436 23:47:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:20.436 23:47:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:20.436 23:47:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:20.436 23:47:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:20.436 23:47:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:20.436 23:47:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:20.436 23:47:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:20.436 23:47:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:20.436 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:20.436 fio-3.35 00:21:20.436 Starting 1 thread 00:21:20.436 EAL: No free 2048 kB hugepages reported on node 1 00:21:21.368 [2024-07-15 23:47:56.392701] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ded40 is same with the state(5) to be set 00:21:21.368 [2024-07-15 23:47:56.392769] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ded40 is same with the state(5) to be set 00:21:22.740 00:21:22.740 test: (groupid=0, jobs=1): err= 0: pid=3843810: Mon Jul 15 23:47:57 2024 00:21:22.740 read: IOPS=7827, BW=122MiB/s (128MB/s)(245MiB/2006msec) 00:21:22.740 slat (nsec): min=2868, max=93874, avg=3602.07, stdev=1669.37 00:21:22.740 clat (usec): min=1636, max=18466, avg=9277.59, stdev=2219.63 00:21:22.740 lat (usec): min=1639, max=18470, avg=9281.19, stdev=2219.65 00:21:22.740 clat percentiles (usec): 00:21:22.740 | 1.00th=[ 4883], 5.00th=[ 5932], 10.00th=[ 6652], 20.00th=[ 7504], 00:21:22.740 | 30.00th=[ 8160], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9634], 00:21:22.740 | 70.00th=[10159], 80.00th=[10814], 90.00th=[11863], 95.00th=[13435], 00:21:22.740 | 99.00th=[16188], 99.50th=[16909], 99.90th=[17695], 99.95th=[17695], 00:21:22.740 | 99.99th=[17695] 00:21:22.740 bw ( KiB/s): min=53824, max=74688, per=50.44%, avg=63176.00, stdev=8757.19, samples=4 00:21:22.740 iops : min= 3364, max= 4668, avg=3948.50, stdev=547.32, samples=4 00:21:22.740 write: IOPS=4471, BW=69.9MiB/s (73.3MB/s)(129MiB/1852msec); 0 zone resets 00:21:22.740 slat (usec): min=30, max=133, avg=33.55, stdev= 4.91 00:21:22.740 clat (usec): min=4813, max=23680, avg=12763.39, stdev=2824.59 00:21:22.740 lat (usec): min=4845, max=23711, avg=12796.94, stdev=2824.85 00:21:22.740 clat percentiles (usec): 00:21:22.740 | 1.00th=[ 7767], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[10159], 00:21:22.740 | 30.00th=[10814], 40.00th=[11600], 50.00th=[12518], 60.00th=[13435], 00:21:22.740 | 70.00th=[14353], 80.00th=[15139], 90.00th=[16581], 95.00th=[17695], 00:21:22.740 | 99.00th=[20055], 99.50th=[20579], 99.90th=[22938], 99.95th=[23200], 
00:21:22.740 | 99.99th=[23725] 00:21:22.740 bw ( KiB/s): min=56864, max=77504, per=92.01%, avg=65832.00, stdev=8893.37, samples=4 00:21:22.740 iops : min= 3554, max= 4844, avg=4114.50, stdev=555.84, samples=4 00:21:22.740 lat (msec) : 2=0.03%, 4=0.17%, 10=50.84%, 20=48.64%, 50=0.32% 00:21:22.740 cpu : usr=74.46%, sys=23.59%, ctx=39, majf=0, minf=63 00:21:22.740 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:21:22.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.740 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:22.740 issued rwts: total=15702,8282,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:22.740 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:22.740 00:21:22.740 Run status group 0 (all jobs): 00:21:22.740 READ: bw=122MiB/s (128MB/s), 122MiB/s-122MiB/s (128MB/s-128MB/s), io=245MiB (257MB), run=2006-2006msec 00:21:22.740 WRITE: bw=69.9MiB/s (73.3MB/s), 69.9MiB/s-69.9MiB/s (73.3MB/s-73.3MB/s), io=129MiB (136MB), run=1852-1852msec 00:21:22.740 23:47:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:22.997 23:47:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:21:22.997 23:47:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:22.997 23:47:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:22.997 23:47:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:21:22.997 23:47:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:22.997 23:47:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:21:22.997 23:47:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:22.997 23:47:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:21:22.997 23:47:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:22.997 23:47:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:22.997 rmmod nvme_tcp 00:21:22.997 rmmod nvme_fabrics 00:21:22.997 rmmod nvme_keyring 00:21:22.997 23:47:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:22.997 23:47:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:21:22.997 23:47:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:21:22.997 23:47:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3843127 ']' 00:21:22.997 23:47:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 3843127 00:21:22.997 23:47:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 3843127 ']' 00:21:22.997 23:47:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 3843127 00:21:22.997 23:47:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:21:22.997 23:47:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:22.997 23:47:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3843127 00:21:22.997 23:47:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:22.997 23:47:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:22.997 23:47:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3843127' 00:21:22.997 killing process with pid 3843127 
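nvmftestfini is now tearing the host side down: the kernel NVMe/TCP initiator modules are unloaded, the nvmf_tgt process is killed, and the following entries remove the namespace and flush the initiator address. A sketch of the equivalent manual cleanup; the namespace-removal step is an assumption about what the _remove_spdk_ns helper amounts to, since its body is not visible in this trace:

    # Sketch of the cleanup sequence around this point in the log.
    modprobe -v -r nvme-tcp            # also drops nvme_fabrics / nvme_keyring
    kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null
    ip netns delete cvl_0_0_ns_spdk    # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1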
00:21:22.997 23:47:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 3843127 00:21:22.997 23:47:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 3843127 00:21:23.256 23:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:23.256 23:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:23.256 23:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:23.256 23:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:23.256 23:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:23.256 23:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:23.256 23:47:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:23.256 23:47:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.783 23:48:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:25.783 00:21:25.783 real 0m11.959s 00:21:25.783 user 0m34.046s 00:21:25.783 sys 0m4.207s 00:21:25.783 23:48:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:25.783 23:48:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.783 ************************************ 00:21:25.783 END TEST nvmf_fio_host 00:21:25.783 ************************************ 00:21:25.783 23:48:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:25.783 23:48:00 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:25.783 23:48:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:25.783 23:48:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:25.783 23:48:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:25.783 ************************************ 00:21:25.783 START TEST nvmf_failover 00:21:25.783 ************************************ 00:21:25.783 23:48:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:25.783 * Looking for test storage... 
00:21:25.783 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:25.783 23:48:00 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:25.783 23:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:21:25.783 23:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:25.783 23:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:25.783 23:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:25.783 23:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:25.783 23:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:25.783 23:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:25.783 23:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:25.783 23:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:25.783 23:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:25.783 23:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:25.783 23:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:25.783 23:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:25.783 23:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:25.783 23:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:25.783 23:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:25.783 23:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:25.783 23:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:25.784 23:48:00 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:25.784 23:48:00 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:25.784 23:48:00 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:25.784 23:48:00 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.784 23:48:00 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.784 23:48:00 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.784 23:48:00 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:21:25.784 23:48:00 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.784 23:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:21:25.784 23:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:25.784 23:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:25.784 23:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:25.784 23:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:25.784 23:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:25.784 23:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:25.784 23:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:25.784 23:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:25.784 23:48:00 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:25.784 23:48:00 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:25.784 23:48:00 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:25.784 23:48:00 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:25.784 23:48:00 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:21:25.784 23:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:25.784 23:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:25.784 23:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:25.784 23:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:21:25.784 23:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:25.784 23:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.784 23:48:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:25.784 23:48:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.784 23:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:25.784 23:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:25.784 23:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:21:25.784 23:48:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:27.684 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:27.684 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:21:27.684 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:27.684 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:27.684 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:27.684 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:27.684 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:27.684 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:21:27.684 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:27.684 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:21:27.684 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:21:27.684 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:21:27.684 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:21:27.684 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:21:27.684 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:21:27.684 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:27.684 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:27.684 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:27.684 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:27.684 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:27.684 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:27.684 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:27.684 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:27.684 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:27.684 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:27.684 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:27.684 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:27.684 23:48:02 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:27.684 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:27.684 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:27.685 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:27.685 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:27.685 Found net devices under 0000:09:00.0: cvl_0_0 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:27.685 Found net devices under 0000:09:00.1: cvl_0_1 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:27.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:27.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:21:27.685 00:21:27.685 --- 10.0.0.2 ping statistics --- 00:21:27.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.685 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:27.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:27.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:21:27.685 00:21:27.685 --- 10.0.0.1 ping statistics --- 00:21:27.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.685 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3846185 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3846185 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3846185 ']' 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:27.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:27.685 [2024-07-15 23:48:02.667082] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0
00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE
00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable
00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3846185
00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3846185
00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3846185 ']'
00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:27.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable
00:21:27.685 23:48:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:21:27.685 [2024-07-15 23:48:02.667082] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization...
00:21:27.685 [2024-07-15 23:48:02.667171] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:27.685 EAL: No free 2048 kB hugepages reported on node 1
00:21:27.685 [2024-07-15 23:48:02.732968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:21:27.943 [2024-07-15 23:48:02.840405] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:27.943 [2024-07-15 23:48:02.840455] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:27.943 [2024-07-15 23:48:02.840483] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:27.943 [2024-07-15 23:48:02.840495] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:27.943 [2024-07-15 23:48:02.840504] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:27.943 [2024-07-15 23:48:02.840551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:21:27.943 [2024-07-15 23:48:02.840608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:21:27.943 [2024-07-15 23:48:02.840611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:21:27.943 23:48:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:21:27.943 23:48:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0
00:21:27.943 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:21:27.943 23:48:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable
00:21:27.943 23:48:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:21:27.943 23:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:27.943 23:48:02 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:21:28.200 [2024-07-15 23:48:03.188442] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:28.200 23:48:03 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:21:28.457 Malloc0
00:21:28.457 23:48:03 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:21:28.713 23:48:03 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:21:28.970 23:48:03 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:29.227 [2024-07-15 23:48:04.188802] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:29.227 23:48:04 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:21:29.484 [2024-07-15 23:48:04.433540] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:21:29.484 23:48:04 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:21:29.742 [2024-07-15 23:48:04.678416] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
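Stripped of the Jenkins paths, the target bring-up just performed is one app start plus five RPCs. A condensed restatement of this run's commands (paths shortened to the repo root; the port loop compresses the three separate add_listener calls logged above):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &    # target runs inside the netns
    $rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport, options from NVMF_TRANSPORT_OPTS above
    $rpc bdev_malloc_create 64 512 -b Malloc0             # 64 MiB RAM-backed bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do                        # three listeners = three candidate paths
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
    done

Note that the RPC socket (/var/tmp/spdk.sock) is a Unix domain socket on the shared filesystem, which is why rpc.py can drive the target from the root namespace even though nvmf_tgt itself runs inside cvl_0_0_ns_spdk.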
00:21:29.742 23:48:04 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3846405
00:21:29.742 23:48:04 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:21:29.742 23:48:04 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:21:29.742 23:48:04 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3846405 /var/tmp/bdevperf.sock
00:21:29.742 23:48:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3846405 ']'
00:21:29.742 23:48:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:29.742 23:48:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:21:29.742 23:48:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:21:29.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:21:29.742 23:48:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable
00:21:29.742 23:48:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:21:29.999 23:48:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:21:29.999 23:48:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0
00:21:29.999 23:48:05 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:30.566 NVMe0n1
00:21:30.566 23:48:05 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:30.823
00:21:30.823 23:48:05 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3846545
00:21:30.823 23:48:05 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:21:30.823 23:48:05 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:21:31.753 23:48:06 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
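This is the core of the test. bdevperf is started with -z, so it idles until driven over its own RPC socket; attaching the same bdev name (NVMe0) at two different target ports gives the controller a second path; then, one second into a 15-second, 128-deep, 4 KiB verify workload, the active path's listener is pulled out from under it. Condensed from the commands above (paths shortened; $rpc as in the earlier sketch):

    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1    # initial path
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1    # standby path, same controller name
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    run_test_pid=$!
    sleep 1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420                                          # yank the active path mid-I/O

The error burst that follows is the target-side fallout: tcp.c complains repeatedly about qpairs already being in the recv state being set while every connection on port 4420 is torn down, and the initiator's bdev_nvme layer fails NVMe0 over to the 4421 path.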
00:21:32.012 [2024-07-15 23:48:07.006847] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x608070 is same with the state(5) to be set
00:21:32.013 [... same tcp.c:1621 message repeated for tqpair=0x608070 through 23:48:07.007659 ...]
00:21:32.013 23:48:07 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:21:35.319 23:48:10 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:35.658
00:21:35.658 23:48:10 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
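The exercise then walks the workload around the ring of ports: with the third path just attached and 4421 removed, the remaining steps logged below re-add 4420, drop 4422, and reap the I/O run. Condensed ($rpc and run_test_pid as before):

    sleep 3
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420    # restore first path
    sleep 1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 # force a third failover
    wait $run_test_pid    # must exit 0: the verify workload has to survive every path flap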
00:21:35.658 [2024-07-15 23:48:10.681033] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x609620 is same with the state(5) to be set
00:21:35.658 [... same tcp.c:1621 message repeated for tqpair=0x609620 through 23:48:10.681190 ...]
00:21:35.658 23:48:10 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:21:38.940 23:48:13 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:38.940 [2024-07-15 23:48:13.942712] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:38.940 23:48:13 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:21:39.871 23:48:14 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:21:40.128 [2024-07-15 23:48:15.202653] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x609e30 is same with the state(5) to be set
00:21:40.128 23:48:15 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 3846545
00:21:46.682 0
00:21:46.682 23:48:20 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 3846405
00:21:46.682 23:48:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3846405 ']'
00:21:46.682 23:48:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3846405
00:21:46.682 23:48:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:21:46.682 23:48:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:21:46.682 23:48:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3846405
00:21:46.682 23:48:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:21:46.682 23:48:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:21:46.682 23:48:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3846405'
00:21:46.682 killing process with pid 3846405
00:21:46.682 23:48:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3846405
00:21:46.682 23:48:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3846405
00:21:46.682 23:48:21 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:21:46.682 [2024-07-15 23:48:04.743565] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization...
00:21:46.682 [2024-07-15 23:48:04.743643] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3846405 ]
00:21:46.682 EAL: No free 2048 kB hugepages reported on node 1
00:21:46.682 [2024-07-15 23:48:04.802924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:46.682 [2024-07-15 23:48:04.915186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:21:46.682 Running I/O for 15 seconds...
00:21:46.682 [2024-07-15 23:48:07.009347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:79024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.682 [2024-07-15 23:48:07.009390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
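Everything from here to the end of try.txt is the same event repeated. Reading the first pair above (the field meanings are standard NVMe submission/completion fields, not anything specific to this run):

    READ sqid:1 cid:41 nsid:1 lba:79024 len:8 ...   # submission: I/O queue 1, command id 41,
                                                    #   8 x 512 B blocks = one 4 KiB bdevperf I/O
    ABORTED - SQ DELETION (00/08) ... dnr:0         # completion: status code type 00 (generic),
                                                    #   status 08 = aborted because its submission queue
                                                    #   was deleted; dnr:0 means the command may be retried

This is the expected shape of a clean failover: removing a listener deletes the qpairs on it, every in-flight command completes as ABORTED - SQ DELETION with the do-not-retry bit clear, and the initiator can reissue those I/Os on a surviving path, which is why the run above still ended with status 0.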
00:21:46.682 [... nvme_qpair.c print_command/print_completion pairs repeated for every queued READ and WRITE (lba 79024 through 79976, len:8 each), all completed ABORTED - SQ DELETION (00/08) ...]
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.685 [2024-07-15 23:48:07.013036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.685 [2024-07-15 23:48:07.013050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.685 [2024-07-15 23:48:07.013065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.685 [2024-07-15 23:48:07.013080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.685 [2024-07-15 23:48:07.013095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.685 [2024-07-15 23:48:07.013109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.685 [2024-07-15 23:48:07.013124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.685 [2024-07-15 23:48:07.013144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.685 [2024-07-15 23:48:07.013160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.685 [2024-07-15 23:48:07.013179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.685 [2024-07-15 23:48:07.013194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.685 [2024-07-15 23:48:07.013209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.685 [2024-07-15 23:48:07.013223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.685 [2024-07-15 23:48:07.013237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.685 [2024-07-15 23:48:07.013267] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:46.685 [2024-07-15 23:48:07.013282] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:46.685 [2024-07-15 23:48:07.013294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80040 len:8 PRP1 0x0 PRP2 0x0 00:21:46.685 [2024-07-15 23:48:07.013307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.685 [2024-07-15 23:48:07.013368] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x124e390 was disconnected and freed. reset controller. 
00:21:46.685 [2024-07-15 23:48:07.013385] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[... four queued ASYNC EVENT REQUEST admin commands (qid:0 cid:0-3) are aborted with "ABORTED - SQ DELETION" ...]
00:21:46.685 [2024-07-15 23:48:07.013531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.685 [2024-07-15 23:48:07.013592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12280f0 (9): Bad file descriptor
00:21:46.685 [2024-07-15 23:48:07.016801] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.685 [2024-07-15 23:48:07.178921] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
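The sequence above is the behavior this test exercises: the active TCP path goes away, every command still queued on the deleted submission queue is completed with ABORTED - SQ DELETION, and bdev_nvme fails over to the next registered trid (10.0.0.2:4421) before the controller reset completes successfully. For context, a minimal sketch of how a target exposes the same subsystem on several listeners so the host has paths to fail over to, assuming SPDK's stock rpc.py helpers; the bdev name is illustrative, while the NQN and addresses are taken from this log:

    # Sketch only: one subsystem, one namespace, three TCP listeners (failover paths).
    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    # Each extra listener is one more trid the host-side bdev_nvme can fail over to.
    for port in 4420 4421 4422; do
        ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
    done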
00:21:46.685 [2024-07-15 23:48:10.682002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:109672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.685 [2024-07-15 23:48:10.682046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the READ/WRITE command + "ABORTED - SQ DELETION" completion pair repeats for queued i/o (READ lba 109672-109976, WRITE lba 110000-110632); nine further commands (WRITE lba 110640-110688, READ lba 109984 and 109992) are then completed manually with the same status ...]
00:21:46.688 [2024-07-15 23:48:10.686163] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13f2d80 was disconnected and freed. reset controller.
00:21:46.688 [2024-07-15 23:48:10.686181] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
[... four queued ASYNC EVENT REQUEST admin commands (qid:0 cid:0-3) are aborted with "ABORTED - SQ DELETION" ...]
00:21:46.688 [2024-07-15 23:48:10.686338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.688 [2024-07-15 23:48:10.686377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12280f0 (9): Bad file descriptor
00:21:46.688 [2024-07-15 23:48:10.689600] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.688 [2024-07-15 23:48:10.720156] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
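The second failover (10.0.0.2:4421 to 10.0.0.2:4422) follows the same pattern as the first. Assuming the listener-removal approach that SPDK's nvmf failover tests typically use, forcing such a path switch from the target side can look like the sketch below; the NQN and address are taken from the log, but whether this particular run used listener removal or another fault-injection method is not visible here:

    # Sketch only: dropping the listener the host is currently connected to aborts
    # its queued i/o (SQ DELETION) and pushes bdev_nvme onto the next trid.
    ./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # The host side then logs "Start failover from 10.0.0.2:4421 to 10.0.0.2:4422"
    # followed by "_bdev_nvme_reset_ctrlr_complete: Resetting controller successful."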
00:21:46.688 [2024-07-15 23:48:15.203843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:37304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.688 [2024-07-15 23:48:15.203892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.689 [... the same ABORTED - SQ DELETION pattern repeats for every outstanding command on the qpair: READ lba:37312 through lba:37568 and WRITE lba:37576 through lba:38320, the in-flight commands printed with SGL descriptors and the still-queued ones completed manually with PRP1 0x0 PRP2 0x0 ...]
00:21:46.692 [2024-07-15 23:48:15.208127] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13f2b70 was disconnected and freed. reset controller.
00:21:46.692 [2024-07-15 23:48:15.208145] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:21:46.692 [... four queued ASYNC EVENT REQUEST admin commands (qid:0, cid:0 through cid:3) aborted with SQ DELETION ...]
00:21:46.692 [2024-07-15 23:48:15.208297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.692 [2024-07-15 23:48:15.211565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.692 [2024-07-15 23:48:15.211604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12280f0 (9): Bad file descriptor
00:21:46.692 [2024-07-15 23:48:15.362262] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
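The failover itself is driven from the target side: withdrawing the listener the initiator is connected to tears down its qpairs, which produces the ABORTED - SQ DELETION storm above and pushes bdev_nvme onto the next registered path. A minimal sketch of that trigger, assuming an SPDK checkout and the NQN/address/ports from this log (nvmf_subsystem_remove_listener is a real SPDK RPC; that failover.sh uses exactly this mechanism is an inference from the notices above, not shown in this excerpt):

  rpc=./scripts/rpc.py                 # scripts/rpc.py in the SPDK tree
  nqn=nqn.2016-06.io.spdk:cnode1
  # Kill the active path; queued I/O is aborted with SQ DELETION and bdev_nvme
  # logs "Start failover from 10.0.0.2:4422 to 10.0.0.2:4420".
  $rpc nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4422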
00:21:46.692 
00:21:46.692 Latency(us)
00:21:46.692 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:46.692 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:46.692 Verification LBA range: start 0x0 length 0x4000
00:21:46.692 NVMe0n1 : 15.02 8485.77 33.15 882.63 0.00 13635.55 537.03 17185.00
00:21:46.692 ===================================================================================================================
00:21:46.692 Total : 8485.77 33.15 882.63 0.00 13635.55 537.03 17185.00
00:21:46.692 Received shutdown signal, test time was about 15.000000 seconds
00:21:46.692 
00:21:46.692 Latency(us)
00:21:46.692 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:46.692 ===================================================================================================================
00:21:46.692 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:46.692 23:48:21 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:21:46.692 23:48:21 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:21:46.692 23:48:21 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:21:46.692 23:48:21 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3848895
00:21:46.692 23:48:21 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:21:46.692 23:48:21 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3848895 /var/tmp/bdevperf.sock
00:21:46.692 23:48:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3848895 ']'
00:21:46.692 23:48:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:46.692 23:48:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:21:46.692 23:48:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
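The count=3 check above asserts one successful reset per forced path switch (4420 to 4421, 4421 to 4422, 4422 to 4420) before the second bdevperf instance starts. A sketch of both steps, assuming the first run's console output was saved to the try.txt the test later cats (paths are relative stand-ins for the absolute workspace paths in the trace):

  # One "Resetting controller successful" per failover, or the test fails.
  count=$(grep -c 'Resetting controller successful' try.txt)
  (( count == 3 )) || { echo "expected 3 resets, got $count" >&2; exit 1; }
  # -z starts bdevperf idle, waiting for RPCs on the -r socket, so controllers
  # can be attached before any I/O is issued; -q/-o/-w/-t/-f match the trace.
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!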
00:21:46.692 23:48:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable
00:21:46.692 23:48:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:21:46.692 23:48:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:21:46.692 23:48:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0
00:21:46.692 23:48:21 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:21:46.950 [2024-07-15 23:48:21.807034] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:21:46.950 23:48:21 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:21:47.207 [2024-07-15 23:48:22.063704] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:21:47.207 23:48:22 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:47.464 NVMe0n1
00:21:47.464 23:48:22 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:47.720 
00:21:47.720 23:48:22 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:48.292 
00:21:48.292 23:48:23 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:21:48.292 23:48:23 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:21:48.550 23:48:23 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:48.806 23:48:23 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:21:52.080 23:48:26 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:21:52.080 23:48:26 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:21:52.080 23:48:26 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3849570
00:21:52.080 23:48:26 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:21:52.080 23:48:26 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 3849570
00:21:53.013 0
00:21:53.013 23:48:28 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:21:53.013 [2024-07-15 23:48:21.291458] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization...
00:21:53.013 [2024-07-15 23:48:21.291538] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3848895 ]
00:21:53.013 EAL: No free 2048 kB hugepages reported on node 1
00:21:53.013 [2024-07-15 23:48:21.350448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:53.013 [2024-07-15 23:48:21.456458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:21:53.013 [2024-07-15 23:48:23.684710] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:21:53.013 [... four queued ASYNC EVENT REQUEST admin commands (qid:0, cid:0 through cid:3) aborted with SQ DELETION ...]
00:21:53.013 [2024-07-15 23:48:23.684927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:53.013 [2024-07-15 23:48:23.684974] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:53.013 [2024-07-15 23:48:23.685006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f90f0 (9): Bad file descriptor
00:21:53.013 [2024-07-15 23:48:23.689963] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:21:53.013 Running I/O for 1 seconds...
00:21:53.013
00:21:53.013 Latency(us)
00:21:53.013 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:53.013 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:53.013 Verification LBA range: start 0x0 length 0x4000
00:21:53.013 NVMe0n1 : 1.01 8558.56 33.43 0.00 0.00 14897.69 2572.89 11505.21
00:21:53.013 ===================================================================================================================
00:21:53.013 Total : 8558.56 33.43 0.00 0.00 14897.69 2572.89 11505.21
00:21:53.013 23:48:28 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:21:53.013 23:48:28 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:21:53.270 23:48:28 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:53.836 23:48:28 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:21:53.836 23:48:28 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:21:53.836 23:48:28 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:54.093 23:48:29 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:21:57.366 23:48:32 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:21:57.366 23:48:32 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:21:57.366 23:48:32 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 3848895
00:21:57.366 23:48:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3848895 ']'
00:21:57.366 23:48:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3848895
00:21:57.366 23:48:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:21:57.366 23:48:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:21:57.366 23:48:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3848895
00:21:57.366 23:48:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:21:57.366 23:48:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:21:57.366 23:48:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3848895'
00:21:57.366 killing process with pid 3848895
00:21:57.366 23:48:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3848895
00:21:57.366 23:48:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3848895
00:21:57.623 23:48:32 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:21:57.623 23:48:32 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:21:57.879 23:48:32 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:21:57.879
23:48:32 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:57.880 23:48:32 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:21:57.880 23:48:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:57.880 23:48:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:21:57.880 23:48:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:57.880 23:48:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:21:57.880 23:48:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:57.880 23:48:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:57.880 rmmod nvme_tcp 00:21:57.880 rmmod nvme_fabrics 00:21:57.880 rmmod nvme_keyring 00:21:57.880 23:48:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:57.880 23:48:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:21:57.880 23:48:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:21:57.880 23:48:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3846185 ']' 00:21:57.880 23:48:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3846185 00:21:57.880 23:48:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3846185 ']' 00:21:57.880 23:48:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3846185 00:21:57.880 23:48:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:57.880 23:48:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:57.880 23:48:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3846185 00:21:58.136 23:48:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:58.136 23:48:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:58.136 23:48:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3846185' 00:21:58.136 killing process with pid 3846185 00:21:58.136 23:48:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3846185 00:21:58.136 23:48:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3846185 00:21:58.393 23:48:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:58.393 23:48:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:58.393 23:48:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:58.393 23:48:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:58.393 23:48:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:58.393 23:48:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.393 23:48:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:58.393 23:48:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.293 23:48:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:00.293 00:22:00.293 real 0m35.023s 00:22:00.293 user 2m3.233s 00:22:00.293 sys 0m5.774s 00:22:00.293 23:48:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:00.293 23:48:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
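Teardown is the same machinery in reverse: the two remaining paths are detached through the bdevperf socket, the subsystem is deleted on the target, and nvmftestfini unloads the kernel NVMe/TCP initiator stack (the rmmod lines above). A condensed sketch, under the same RPC and path assumptions as before:

    # detach the paths still registered on the bdevperf socket, newest first
    for port in 4422 4421; do
        $RPC -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
            -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done
    # remove the subsystem from the target, then unload the initiator modules
    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp        # emits the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines seen above
    modprobe -v -r nvme-fabrics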
00:22:00.293 ************************************ 00:22:00.293 END TEST nvmf_failover 00:22:00.293 ************************************ 00:22:00.293 23:48:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:00.293 23:48:35 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:00.293 23:48:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:00.293 23:48:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:00.293 23:48:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:00.293 ************************************ 00:22:00.293 START TEST nvmf_host_discovery 00:22:00.293 ************************************ 00:22:00.293 23:48:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:00.551 * Looking for test storage... 00:22:00.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:00.551 23:48:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:00.551 23:48:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:22:00.551 23:48:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:00.551 23:48:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:00.551 23:48:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:00.551 23:48:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:00.551 23:48:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:00.551 23:48:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:00.551 23:48:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:00.551 23:48:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:00.551 23:48:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:00.551 23:48:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:00.551 23:48:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:00.551 23:48:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:00.552 23:48:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:00.552 23:48:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:00.552 23:48:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:00.552 23:48:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:00.552 23:48:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:00.552 23:48:35 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:00.552 23:48:35 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:00.552 23:48:35 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:00.552 23:48:35 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.552 23:48:35 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.552 23:48:35 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.552 23:48:35 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:22:00.552 23:48:35 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.552 23:48:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:22:00.552 23:48:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:00.552 23:48:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:00.552 23:48:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:00.552 23:48:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:00.552 23:48:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:00.552 23:48:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:00.552 23:48:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:00.552 23:48:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:00.552 23:48:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:00.552 23:48:35 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:00.552 23:48:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:00.552 23:48:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:00.552 23:48:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:00.552 23:48:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:00.552 23:48:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:22:00.552 23:48:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:00.552 23:48:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:00.552 23:48:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:00.552 23:48:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:00.552 23:48:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:00.552 23:48:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.552 23:48:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:00.552 23:48:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.552 23:48:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:00.552 23:48:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:00.552 23:48:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:22:00.552 23:48:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:02.479 23:48:37 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:02.479 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:02.479 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.479 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:02.480 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:02.480 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:02.480 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:02.480 23:48:37 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:02.480 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.480 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:02.480 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.480 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:02.480 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:02.480 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.480 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:02.480 Found net devices under 0000:09:00.0: cvl_0_0 00:22:02.480 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.480 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:02.480 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.480 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:02.480 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.480 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:02.480 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:02.480 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.480 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:02.480 Found net devices under 0000:09:00.1: cvl_0_1 00:22:02.480 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.480 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:02.480 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:22:02.480 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:02.480 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:02.480 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:02.480 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:02.480 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:02.480 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:02.480 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:02.480 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:02.480 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:02.480 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:02.480 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:02.480 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:02.480 23:48:37 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:02.480 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:02.480 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:02.480 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:02.480 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:02.480 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:02.480 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:02.480 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:02.480 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:02.739 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:02.739 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:02.739 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:02.739 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:22:02.739 00:22:02.739 --- 10.0.0.2 ping statistics --- 00:22:02.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.739 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:22:02.739 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:02.739 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:02.739 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:22:02.739 00:22:02.739 --- 10.0.0.1 ping statistics --- 00:22:02.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.739 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:22:02.739 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:02.739 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:22:02.739 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:02.739 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:02.739 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:02.739 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:02.739 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:02.739 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:02.739 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:02.739 23:48:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:02.739 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:02.739 23:48:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:02.739 23:48:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.739 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=3852169 00:22:02.739 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:02.739 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 3852169 00:22:02.739 23:48:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 3852169 ']' 00:22:02.739 23:48:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.739 23:48:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:02.739 23:48:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:02.739 23:48:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:02.739 23:48:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.739 [2024-07-15 23:48:37.687559] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:22:02.739 [2024-07-15 23:48:37.687637] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:02.739 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.739 [2024-07-15 23:48:37.750629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.739 [2024-07-15 23:48:37.859455] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:02.739 [2024-07-15 23:48:37.859524] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:02.739 [2024-07-15 23:48:37.859538] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:02.739 [2024-07-15 23:48:37.859551] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:02.739 [2024-07-15 23:48:37.859561] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
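The discovery test drives two SPDK applications at once: the nvmf target launched above inside the cvl_0_0_ns_spdk namespace (shm id 0, trace mask 0xFFFF, core mask 0x2, default RPC socket), and a second nvmf_tgt on the host side bound to a private RPC socket so the two instances do not collide. That host instance is then pointed at the target's discovery service, as the lines that follow show. A rough equivalent of the harness steps, assuming the namespace wiring above is in place and with SPDK standing in for this workspace's spdk checkout:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # target side, inside the network namespace
    ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    # host side, core 0, private RPC socket
    $SPDK/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
    # once both answer RPCs, follow the discovery service on 10.0.0.2:8009
    $SPDK/scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme \
        -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

From that point every subsystem the target advertises over port 8009 shows up on the host instance as a bdev_nvme controller (nvme0) and namespace bdevs (nvme0n1, nvme0n2, ...), which is what the get_subsystem_names/get_bdev_list polling below is waiting for.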
00:22:02.739 [2024-07-15 23:48:37.859613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:02.997 23:48:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:02.997 23:48:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:22:02.997 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:02.997 23:48:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:02.997 23:48:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.997 23:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:02.997 23:48:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:02.997 23:48:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.997 23:48:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.997 [2024-07-15 23:48:37.982345] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:02.997 23:48:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.997 23:48:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:02.997 23:48:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.997 23:48:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.997 [2024-07-15 23:48:37.990504] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:02.997 23:48:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.997 23:48:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:02.997 23:48:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.997 23:48:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.997 null0 00:22:02.997 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.997 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:02.997 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.997 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.997 null1 00:22:02.997 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.997 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:02.997 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.997 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.997 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.997 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3852303 00:22:02.997 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:02.997 23:48:38 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 3852303 /tmp/host.sock 00:22:02.997 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 3852303 ']' 00:22:02.997 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:02.997 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:02.997 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:02.997 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:02.997 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:02.997 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.997 [2024-07-15 23:48:38.060276] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:22:02.997 [2024-07-15 23:48:38.060372] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3852303 ] 00:22:02.997 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.997 [2024-07-15 23:48:38.117524] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.256 [2024-07-15 23:48:38.223521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.256 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:03.256 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:22:03.256 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:03.256 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:03.256 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.256 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.256 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.256 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:03.256 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.256 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.256 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.256 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:22:03.256 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:22:03.256 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:03.256 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:03.256 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.256 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:03.256 23:48:38 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:22:03.256 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:03.256 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.514 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:03.514 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:22:03.514 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:03.514 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.514 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:03.514 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.514 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:03.514 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:03.514 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.514 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:22:03.514 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:03.514 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.514 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.514 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.514 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:22:03.514 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:03.514 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:03.514 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.514 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.514 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:03.514 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:03.514 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.514 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:03.514 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:22:03.514 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:03.514 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:03.514 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.514 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:03.514 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.514 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:03.514 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.514 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:22:03.514 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
null0 00:22:03.514 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.514 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.514 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.514 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:22:03.514 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:03.514 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.514 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:03.514 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.514 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:03.514 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:03.515 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.515 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:22:03.515 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:22:03.515 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:03.515 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.515 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:03.515 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.515 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:03.515 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:03.515 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.515 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:03.515 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:03.515 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.515 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.515 [2024-07-15 23:48:38.620214] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:03.515 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.515 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:22:03.515 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:03.515 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:03.515 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.515 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.515 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:03.515 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:03.515 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:22:03.773 23:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:22:04.340 [2024-07-15 23:48:39.399777] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:04.340 [2024-07-15 23:48:39.399813] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:04.340 [2024-07-15 23:48:39.399834] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:04.598 [2024-07-15 23:48:39.526259] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:04.598 [2024-07-15 23:48:39.674049] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:22:04.598 [2024-07-15 23:48:39.674075] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:04.855 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:04.855 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:04.855 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:04.855 23:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:04.855 23:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:04.855 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.855 23:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:04.855 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:04.855 23:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:04.855 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.855 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.855 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:04.855 23:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:04.855 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:04.855 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:04.855 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:04.855 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:22:04.855 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:04.855 23:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:04.855 23:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:04.855 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.855 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:04.855 23:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:04.855 23:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:04.855 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.855 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:04.855 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:04.855 23:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:04.855 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:04.855 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:04.855 23:48:39 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:04.855 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:22:04.855 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:22:04.855 23:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:04.855 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.855 23:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:04.855 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:04.855 23:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:04.855 23:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:04.855 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.855 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:22:04.855 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:04.855 23:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:22:04.855 23:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:04.855 23:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:04.856 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:04.856 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:04.856 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:04.856 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:04.856 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:04.856 23:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:04.856 23:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:04.856 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.856 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:04.856 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.856 23:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:04.856 23:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:22:04.856 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:04.856 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:04.856 23:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:04.856 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.856 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:04.856 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.856 23:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:04.856 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:04.856 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:04.856 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:04.856 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:04.856 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:05.114 23:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:05.114 23:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:05.114 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.114 23:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:05.114 23:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:05.114 23:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:05.114 23:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.114 23:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:05.114 23:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:05.114 23:48:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:22:05.114 23:48:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:05.114 23:48:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:05.114 23:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:05.114 23:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:05.114 23:48:40 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:22:05.114 23:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:05.114 23:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:05.114 23:48:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:05.114 23:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.114 23:48:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:05.114 23:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:05.114 23:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:05.115 [2024-07-15 23:48:40.064473] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:05.115 [2024-07-15 23:48:40.064848] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:05.115 [2024-07-15 23:48:40.064887] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.115 23:48:40 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:22:05.115 23:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:22:05.115 [2024-07-15 23:48:40.193724] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:05.679 [2024-07-15 23:48:40.501229] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:05.679 [2024-07-15 23:48:40.501264] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:05.679 [2024-07-15 23:48:40.501273] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:06.245 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:06.245 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:06.245 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:22:06.245 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:06.245 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:06.245 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.245 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:06.245 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:06.245 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:06.245 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.245 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:06.245 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:06.245 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:22:06.245 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:06.245 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:06.245 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:06.245 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:06.245 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:06.245 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:06.245 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:06.245 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:06.245 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:06.245 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.245 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:06.245 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.245 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:06.245 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:06.245 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:06.245 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:06.245 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:06.245 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.245 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:06.245 [2024-07-15 23:48:41.284613] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:06.245 [2024-07-15 23:48:41.284652] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:06.245 [2024-07-15 23:48:41.285295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.245 [2024-07-15 23:48:41.285342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.245 [2024-07-15 23:48:41.285359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.246 [2024-07-15 23:48:41.285388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.246 [2024-07-15 23:48:41.285402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.246 [2024-07-15 23:48:41.285416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.246 [2024-07-15 23:48:41.285430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.246 [2024-07-15 23:48:41.285444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.246 [2024-07-15 23:48:41.285457] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7fc00 is same with the state(5) to be set 00:22:06.246 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.246 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:06.246 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:06.246 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:06.246 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:06.246 23:48:41 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:06.246 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:06.246 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:06.246 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:06.246 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.246 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:06.246 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:06.246 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:06.246 [2024-07-15 23:48:41.295288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e7fc00 (9): Bad file descriptor 00:22:06.246 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.246 [2024-07-15 23:48:41.305329] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:06.246 [2024-07-15 23:48:41.305652] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.246 [2024-07-15 23:48:41.305682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7fc00 with addr=10.0.0.2, port=4420 00:22:06.246 [2024-07-15 23:48:41.305699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7fc00 is same with the state(5) to be set 00:22:06.246 [2024-07-15 23:48:41.305722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e7fc00 (9): Bad file descriptor 00:22:06.246 [2024-07-15 23:48:41.305756] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:06.246 [2024-07-15 23:48:41.305773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:06.246 [2024-07-15 23:48:41.305789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:06.246 [2024-07-15 23:48:41.305808] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
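[Reader's aid] The repeated "waitforcondition ... / eval ..." pairs in this trace come from a polling helper in autotest_common.sh. The following is a sketch reconstructed from the trace itself (local cond at @912, local max=10 at @913, (( max-- )) at @914, eval at @915, return 0 at @916, sleep 1 at @918), not a verbatim copy of the SPDK source:

function waitforcondition() {
    local cond=$1
    local max=10
    while ((max--)); do
        # the condition arrives as a quoted expression, so it must go through eval
        if eval "$cond"; then
            return 0
        fi
        sleep 1
    done
    return 1
}

Used as in the trace above, e.g. waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'. This retry loop is why the test survives the transient errno 111 (ECONNREFUSED) reconnect failures logged while the target's listeners are changing.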
00:22:06.246 [2024-07-15 23:48:41.315424] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:06.246 [2024-07-15 23:48:41.315628] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.246 [2024-07-15 23:48:41.315656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7fc00 with addr=10.0.0.2, port=4420 00:22:06.246 [2024-07-15 23:48:41.315672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7fc00 is same with the state(5) to be set 00:22:06.246 [2024-07-15 23:48:41.315693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e7fc00 (9): Bad file descriptor 00:22:06.246 [2024-07-15 23:48:41.315713] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:06.246 [2024-07-15 23:48:41.315726] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:06.246 [2024-07-15 23:48:41.315739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:06.246 [2024-07-15 23:48:41.315757] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.246 [2024-07-15 23:48:41.325506] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:06.246 [2024-07-15 23:48:41.325705] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.246 [2024-07-15 23:48:41.325732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7fc00 with addr=10.0.0.2, port=4420 00:22:06.246 [2024-07-15 23:48:41.325747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7fc00 is same with the state(5) to be set 00:22:06.246 [2024-07-15 23:48:41.325768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e7fc00 (9): Bad file descriptor 00:22:06.246 [2024-07-15 23:48:41.325788] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:06.246 [2024-07-15 23:48:41.325801] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:06.246 [2024-07-15 23:48:41.325815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:06.246 [2024-07-15 23:48:41.325833] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:06.246 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.246 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:06.246 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:06.246 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:06.246 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:06.246 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:06.246 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:06.246 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:06.246 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:06.246 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:06.246 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.246 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:06.246 [2024-07-15 23:48:41.335590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:06.246 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:06.246 [2024-07-15 23:48:41.335764] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.246 [2024-07-15 23:48:41.335793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7fc00 with addr=10.0.0.2, port=4420 00:22:06.246 [2024-07-15 23:48:41.335811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7fc00 is same with the state(5) to be set 00:22:06.246 [2024-07-15 23:48:41.335833] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e7fc00 (9): Bad file descriptor 00:22:06.246 [2024-07-15 23:48:41.335854] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:06.246 [2024-07-15 23:48:41.335869] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:06.246 [2024-07-15 23:48:41.335882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:06.246 [2024-07-15 23:48:41.335901] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
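[Reader's aid] The bdev and controller-name checks interleaved with the reconnect errors use two more helpers visible at host/discovery.sh@55 and @59. A sketch matching the pipelines in the trace (same hedges as above: reconstructed, not verbatim source):

# Flatten RPC output to one sorted line, e.g. "nvme0n1 nvme0n2",
# so the [[ ... == "nvme0n1 nvme0n2" ]] comparisons are stable.
function get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

function get_subsystem_names() {
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

The sort | xargs normalization is what lets the test compare a JSON array against a plain space-separated string literal.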
00:22:06.246 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:06.246 [2024-07-15 23:48:41.345678] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:06.246 [2024-07-15 23:48:41.345827] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.246 [2024-07-15 23:48:41.345855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7fc00 with addr=10.0.0.2, port=4420 00:22:06.246 [2024-07-15 23:48:41.345871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7fc00 is same with the state(5) to be set 00:22:06.246 [2024-07-15 23:48:41.345892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e7fc00 (9): Bad file descriptor 00:22:06.246 [2024-07-15 23:48:41.345912] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:06.246 [2024-07-15 23:48:41.345925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:06.246 [2024-07-15 23:48:41.345953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:06.246 [2024-07-15 23:48:41.345983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.246 [2024-07-15 23:48:41.355764] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:06.246 [2024-07-15 23:48:41.355942] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.246 [2024-07-15 23:48:41.355977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7fc00 with addr=10.0.0.2, port=4420 00:22:06.246 [2024-07-15 23:48:41.355993] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7fc00 is same with the state(5) to be set 00:22:06.246 [2024-07-15 23:48:41.356015] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e7fc00 (9): Bad file descriptor 00:22:06.246 [2024-07-15 23:48:41.356041] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:06.246 [2024-07-15 23:48:41.356055] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:06.246 [2024-07-15 23:48:41.356068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:06.246 [2024-07-15 23:48:41.356086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:06.246 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.246 [2024-07-15 23:48:41.365848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:06.246 [2024-07-15 23:48:41.366045] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.246 [2024-07-15 23:48:41.366073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7fc00 with addr=10.0.0.2, port=4420 00:22:06.246 [2024-07-15 23:48:41.366089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7fc00 is same with the state(5) to be set 00:22:06.246 [2024-07-15 23:48:41.366110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e7fc00 (9): Bad file descriptor 00:22:06.246 [2024-07-15 23:48:41.366130] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:06.246 [2024-07-15 23:48:41.366143] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:06.246 [2024-07-15 23:48:41.366156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:06.246 [2024-07-15 23:48:41.366174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.505 [2024-07-15 23:48:41.371280] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:06.505 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:06.505 [2024-07-15 23:48:41.371309] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:06.505 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:06.505 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:06.505 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:06.505 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:06.505 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:06.505 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:06.505 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:22:06.505 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:06.505 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.505 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:06.505 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:06.505 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:06.505 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:06.505 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:22:06.505 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:22:06.505 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:06.505 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:22:06.505 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:06.505 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:06.505 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:06.505 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:06.505 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:06.505 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:06.505 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:06.505 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # 
jq '. | length' 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.506 23:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:07.880 [2024-07-15 23:48:42.658612] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:07.880 [2024-07-15 23:48:42.658639] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:07.880 [2024-07-15 23:48:42.658660] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:07.880 [2024-07-15 23:48:42.745923] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:22:07.880 [2024-07-15 23:48:42.813827] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:07.880 [2024-07-15 23:48:42.813860] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:07.880 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.880 23:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:07.880 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:22:07.880 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:07.880 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:07.880 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:07.880 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:07.880 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:07.880 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:07.880 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.880 23:48:42 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:22:07.880 request: 00:22:07.880 { 00:22:07.880 "name": "nvme", 00:22:07.880 "trtype": "tcp", 00:22:07.880 "traddr": "10.0.0.2", 00:22:07.880 "adrfam": "ipv4", 00:22:07.881 "trsvcid": "8009", 00:22:07.881 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:07.881 "wait_for_attach": true, 00:22:07.881 "method": "bdev_nvme_start_discovery", 00:22:07.881 "req_id": 1 00:22:07.881 } 00:22:07.881 Got JSON-RPC error response 00:22:07.881 response: 00:22:07.881 { 00:22:07.881 "code": -17, 00:22:07.881 "message": "File exists" 00:22:07.881 } 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:07.881 request: 00:22:07.881 { 00:22:07.881 "name": "nvme_second", 00:22:07.881 "trtype": "tcp", 00:22:07.881 "traddr": "10.0.0.2", 00:22:07.881 "adrfam": "ipv4", 00:22:07.881 "trsvcid": "8009", 00:22:07.881 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:07.881 "wait_for_attach": true, 00:22:07.881 "method": "bdev_nvme_start_discovery", 00:22:07.881 "req_id": 1 00:22:07.881 } 00:22:07.881 Got JSON-RPC error response 00:22:07.881 response: 00:22:07.881 { 00:22:07.881 "code": -17, 00:22:07.881 "message": "File exists" 00:22:07.881 } 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.881 23:48:42 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.881 23:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:09.254 [2024-07-15 23:48:44.005804] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.254 [2024-07-15 23:48:44.005867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9ac90 with addr=10.0.0.2, port=8010 00:22:09.254 [2024-07-15 23:48:44.005894] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:09.254 [2024-07-15 23:48:44.005908] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:09.254 [2024-07-15 23:48:44.005920] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:10.189 [2024-07-15 23:48:45.008341] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:10.189 [2024-07-15 23:48:45.008410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9ac90 with addr=10.0.0.2, port=8010 00:22:10.189 [2024-07-15 23:48:45.008441] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:10.189 [2024-07-15 23:48:45.008456] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:10.189 [2024-07-15 23:48:45.008468] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:11.124 [2024-07-15 23:48:46.010449] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:11.124 request: 00:22:11.124 { 00:22:11.124 "name": "nvme_second", 00:22:11.124 "trtype": "tcp", 00:22:11.124 "traddr": "10.0.0.2", 00:22:11.124 "adrfam": "ipv4", 00:22:11.124 "trsvcid": "8010", 00:22:11.124 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:11.124 "wait_for_attach": false, 00:22:11.124 "attach_timeout_ms": 3000, 00:22:11.124 "method": "bdev_nvme_start_discovery", 00:22:11.124 "req_id": 1 00:22:11.124 } 00:22:11.124 Got JSON-RPC error response 00:22:11.124 response: 00:22:11.124 { 00:22:11.124 "code": -110, 
00:22:11.124 "message": "Connection timed out" 00:22:11.124 } 00:22:11.124 23:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:11.124 23:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:22:11.124 23:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:11.124 23:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:11.124 23:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:11.124 23:48:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:22:11.124 23:48:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:11.124 23:48:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:11.124 23:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.124 23:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:11.124 23:48:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:11.124 23:48:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:11.124 23:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.124 23:48:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:22:11.124 23:48:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:22:11.124 23:48:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3852303 00:22:11.124 23:48:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:22:11.124 23:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:11.124 23:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:22:11.124 23:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:11.124 23:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:22:11.124 23:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:11.124 23:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:11.124 rmmod nvme_tcp 00:22:11.124 rmmod nvme_fabrics 00:22:11.124 rmmod nvme_keyring 00:22:11.124 23:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:11.124 23:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:22:11.124 23:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:22:11.124 23:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 3852169 ']' 00:22:11.124 23:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 3852169 00:22:11.124 23:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 3852169 ']' 00:22:11.124 23:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 3852169 00:22:11.124 23:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:22:11.124 23:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:11.124 23:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3852169 00:22:11.124 23:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 
00:22:11.124 23:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:11.124 23:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3852169' 00:22:11.124 killing process with pid 3852169 00:22:11.124 23:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 3852169 00:22:11.124 23:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 3852169 00:22:11.383 23:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:11.383 23:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:11.383 23:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:11.383 23:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:11.383 23:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:11.383 23:48:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.383 23:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:11.383 23:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:13.921 00:22:13.921 real 0m13.050s 00:22:13.921 user 0m18.757s 00:22:13.921 sys 0m2.808s 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:13.921 ************************************ 00:22:13.921 END TEST nvmf_host_discovery 00:22:13.921 ************************************ 00:22:13.921 23:48:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:13.921 23:48:48 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:13.921 23:48:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:13.921 23:48:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:13.921 23:48:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:13.921 ************************************ 00:22:13.921 START TEST nvmf_host_multipath_status 00:22:13.921 ************************************ 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:13.921 * Looking for test storage... 
00:22:13.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:13.921 23:48:48 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:22:13.921 23:48:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:15.822 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:15.822 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:22:15.822 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:15.822 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:15.822 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:15.822 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:15.822 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:15.822 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:22:15.822 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:15.822 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:22:15.822 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:22:15.822 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:22:15.822 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:22:15.822 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:22:15.822 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:22:15.822 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:15.822 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:15.822 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:15.822 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:15.822 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:15.822 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:15.822 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:15.822 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:15.822 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:15.822 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:15.822 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:15.822 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:15.822 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:15.822 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:15.822 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:15.822 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:15.822 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:15.822 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:15.822 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:15.822 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:15.822 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:15.822 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:15.822 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.822 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.822 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:15.823 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
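The per-PCI loop that follows resolves each discovered E810 function to its kernel net devices through sysfs. Condensed from the xtrace below (the per-interface up-state check seen in the trace is elided here):

    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # NICs exposed by this PCI function
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done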
00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:15.823 Found net devices under 0000:09:00.0: cvl_0_0 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:15.823 Found net devices under 0000:09:00.1: cvl_0_1 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:15.823 23:48:50 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:15.823 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:15.823 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:22:15.823 00:22:15.823 --- 10.0.0.2 ping statistics --- 00:22:15.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.823 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:15.823 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:15.823 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:22:15.823 00:22:15.823 --- 10.0.0.1 ping statistics --- 00:22:15.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.823 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3855335 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3855335 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3855335 ']' 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:15.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:15.823 [2024-07-15 23:48:50.651545] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
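For reference, the nvmf_tcp_init plumbing traced above reduces to the commands below (copied from the xtrace): the target port cvl_0_0 moves into a private namespace at 10.0.0.2, the initiator keeps cvl_0_1 at 10.0.0.1, and both directions are verified with a ping, after which nvmf_tgt is started inside the namespace (the Starting SPDK banner above).

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port joins the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator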
00:22:15.823 [2024-07-15 23:48:50.651613] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:15.823 EAL: No free 2048 kB hugepages reported on node 1 00:22:15.823 [2024-07-15 23:48:50.719119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:15.823 [2024-07-15 23:48:50.829387] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:15.823 [2024-07-15 23:48:50.829446] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:15.823 [2024-07-15 23:48:50.829478] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:15.823 [2024-07-15 23:48:50.829490] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:15.823 [2024-07-15 23:48:50.829500] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:15.823 [2024-07-15 23:48:50.830976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.823 [2024-07-15 23:48:50.830986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:15.823 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:22:16.081 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:16.081 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:16.081 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:16.081 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:16.081 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3855335 00:22:16.081 23:48:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:16.081 [2024-07-15 23:48:51.195126] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:16.339 23:48:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:16.597 Malloc0 00:22:16.597 23:48:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:16.855 23:48:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:17.113 23:48:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:17.370 [2024-07-15 23:48:52.244679] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:17.371 23:48:52 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:17.371 [2024-07-15 23:48:52.481327] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:17.629 23:48:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3855504 00:22:17.629 23:48:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:17.629 23:48:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3855504 /var/tmp/bdevperf.sock 00:22:17.629 23:48:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3855504 ']' 00:22:17.629 23:48:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:17.629 23:48:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:17.629 23:48:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:17.629 23:48:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:17.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:17.629 23:48:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:17.629 23:48:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:17.887 23:48:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:17.887 23:48:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:22:17.887 23:48:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:18.145 23:48:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:22:18.710 Nvme0n1 00:22:18.710 23:48:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:18.967 Nvme0n1 00:22:18.967 23:48:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:22:18.967 23:48:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:21.521 23:48:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:22:21.521 23:48:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:21.521 23:48:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:21.521 23:48:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:22:22.896 23:48:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:22:22.896 23:48:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:22.896 23:48:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:22.896 23:48:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:22.896 23:48:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:22.896 23:48:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:22.896 23:48:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:22.896 23:48:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:23.155 23:48:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:23.155 23:48:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:23.155 23:48:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.155 23:48:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:23.414 23:48:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:23.414 23:48:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:23.414 23:48:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.414 23:48:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:23.672 23:48:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:23.672 23:48:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:23.672 23:48:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.672 23:48:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:23.930 23:48:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:23.930 23:48:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:23.930 23:48:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.930 23:48:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:24.188 23:48:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.188 23:48:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:22:24.188 23:48:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:24.446 23:48:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:24.705 23:48:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:22:25.648 23:49:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:22:25.648 23:49:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:25.648 23:49:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:25.648 23:49:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:25.905 23:49:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:25.905 23:49:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:25.905 23:49:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:25.905 23:49:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:26.162 23:49:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:26.162 23:49:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:26.162 23:49:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.162 23:49:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:26.421 23:49:01 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:26.421 23:49:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:26.421 23:49:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.421 23:49:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:26.679 23:49:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:26.679 23:49:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:26.679 23:49:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.679 23:49:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:26.938 23:49:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:26.938 23:49:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:26.938 23:49:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.938 23:49:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:27.196 23:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:27.196 23:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:27.196 23:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:27.454 23:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:27.711 23:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:22:28.644 23:49:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:22:28.644 23:49:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:28.644 23:49:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.644 23:49:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:28.902 23:49:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.902 23:49:03 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:28.902 23:49:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.902 23:49:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:29.159 23:49:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:29.159 23:49:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:29.159 23:49:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:29.159 23:49:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:29.417 23:49:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:29.417 23:49:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:29.417 23:49:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:29.417 23:49:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:29.675 23:49:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:29.675 23:49:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:29.675 23:49:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:29.675 23:49:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:29.933 23:49:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:29.933 23:49:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:29.933 23:49:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:29.933 23:49:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:30.191 23:49:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:30.191 23:49:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:22:30.191 23:49:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:30.449 23:49:05 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:30.706 23:49:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:22:31.640 23:49:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:22:31.640 23:49:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:31.640 23:49:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.640 23:49:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:31.898 23:49:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.898 23:49:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:31.898 23:49:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.898 23:49:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:32.156 23:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:32.156 23:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:32.156 23:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:32.156 23:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:32.414 23:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:32.414 23:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:32.414 23:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:32.414 23:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:32.672 23:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:32.672 23:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:32.672 23:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:32.672 23:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:32.929 23:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:22:32.929 23:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:32.929 23:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:32.929 23:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:33.187 23:49:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:33.187 23:49:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:22:33.187 23:49:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:33.444 23:49:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:33.702 23:49:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:22:34.635 23:49:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:22:34.635 23:49:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:34.635 23:49:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.635 23:49:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:34.893 23:49:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:34.893 23:49:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:34.893 23:49:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.893 23:49:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:35.151 23:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:35.151 23:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:35.151 23:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:35.151 23:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:35.408 23:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:35.408 23:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:22:35.408 23:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:35.408 23:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:35.665 23:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:35.665 23:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:35.665 23:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:35.665 23:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:35.924 23:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:35.924 23:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:35.924 23:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:35.924 23:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:36.182 23:49:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:36.182 23:49:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:22:36.182 23:49:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:36.440 23:49:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:36.697 23:49:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:22:37.667 23:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:22:37.667 23:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:37.667 23:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.667 23:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:37.925 23:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:37.925 23:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:37.925 23:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.925 23:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:38.183 23:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:38.183 23:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:38.183 23:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:38.183 23:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:38.440 23:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:38.440 23:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:38.440 23:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:38.440 23:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:38.696 23:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:38.696 23:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:38.696 23:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:38.696 23:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:38.954 23:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:38.954 23:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:38.954 23:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:38.954 23:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:39.210 23:49:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:39.210 23:49:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:22:39.466 23:49:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:22:39.466 23:49:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:22:39.722 23:49:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:39.978 23:49:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:22:40.908 23:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:22:40.908 23:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:40.908 23:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:40.908 23:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:41.165 23:49:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:41.165 23:49:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:41.165 23:49:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:41.165 23:49:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:41.422 23:49:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:41.422 23:49:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:41.422 23:49:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:41.422 23:49:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:41.680 23:49:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:41.680 23:49:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:41.680 23:49:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:41.680 23:49:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:41.938 23:49:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:41.938 23:49:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:41.938 23:49:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:41.938 23:49:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:42.196 23:49:17 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:42.196 23:49:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:42.196 23:49:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:42.196 23:49:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:42.454 23:49:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:42.454 23:49:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:22:42.454 23:49:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:42.713 23:49:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:42.971 23:49:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:22:43.906 23:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:22:43.906 23:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:43.906 23:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:43.906 23:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:44.165 23:49:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:44.165 23:49:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:44.165 23:49:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:44.165 23:49:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:44.423 23:49:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:44.423 23:49:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:44.423 23:49:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:44.423 23:49:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:44.681 23:49:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:44.681 23:49:19 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:44.681 23:49:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:44.681 23:49:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:44.939 23:49:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:44.939 23:49:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:44.939 23:49:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:44.939 23:49:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:45.197 23:49:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:45.197 23:49:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:45.197 23:49:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:45.197 23:49:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:45.456 23:49:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:45.456 23:49:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:22:45.456 23:49:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:45.714 23:49:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:45.972 23:49:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:22:46.907 23:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:22:46.907 23:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:46.907 23:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:46.907 23:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:47.165 23:49:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:47.165 23:49:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:47.165 23:49:22 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:47.165 23:49:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:47.423 23:49:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:47.423 23:49:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:47.423 23:49:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:47.423 23:49:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:47.681 23:49:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:47.681 23:49:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:47.681 23:49:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:47.681 23:49:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:47.939 23:49:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:47.939 23:49:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:47.939 23:49:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:47.939 23:49:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:48.197 23:49:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:48.197 23:49:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:48.197 23:49:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:48.197 23:49:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:48.455 23:49:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:48.455 23:49:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:22:48.455 23:49:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:48.713 23:49:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:48.971 23:49:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:22:49.905 23:49:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:22:49.905 23:49:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:49.905 23:49:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:49.905 23:49:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:50.163 23:49:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:50.163 23:49:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:50.163 23:49:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:50.163 23:49:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:50.421 23:49:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:50.421 23:49:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:50.421 23:49:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:50.421 23:49:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:50.680 23:49:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:50.680 23:49:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:50.680 23:49:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:50.680 23:49:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:50.939 23:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:50.939 23:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:50.939 23:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:50.939 23:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:51.197 23:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:51.197 23:49:26 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:51.197 23:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:51.197 23:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:51.455 23:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:51.455 23:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3855504 00:22:51.455 23:49:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 3855504 ']' 00:22:51.455 23:49:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 3855504 00:22:51.455 23:49:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:22:51.455 23:49:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:51.455 23:49:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3855504 00:22:51.455 23:49:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:51.455 23:49:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:51.455 23:49:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3855504' 00:22:51.455 killing process with pid 3855504 00:22:51.455 23:49:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3855504 00:22:51.455 23:49:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3855504 00:22:51.725 Connection closed with partial response: 00:22:51.725 00:22:51.725 00:22:51.725 23:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3855504 00:22:51.725 23:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:51.725 [2024-07-15 23:48:52.544838] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:22:51.725 [2024-07-15 23:48:52.544920] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3855504 ] 00:22:51.725 EAL: No free 2048 kB hugepages reported on node 1 00:22:51.725 [2024-07-15 23:48:52.603909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.725 [2024-07-15 23:48:52.713438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:51.725 Running I/O for 90 seconds... 
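The port_status/check_status/set_ANA_state helpers traced above are thin wrappers around two RPCs: bdev_nvme_get_io_paths against the bdevperf RPC socket on the host side, and nvmf_subsystem_listener_set_ana_state against the target. A minimal sketch of what host/multipath_status.sh@59-73 evidently does, reconstructed from the traced commands (the exact function bodies are an assumption; only the RPC names, arguments, and jq filters are taken from the trace):

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# port_status <trsvcid> <field> <expected>: assert one io_path field via jq (assumed body)
port_status() {
	[[ $($rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
	     jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2") == "$3" ]]
}

# check_status <4420 current> <4421 current> <4420 connected> <4421 connected> <4420 accessible> <4421 accessible>
check_status() {
	port_status 4420 current "$1" && port_status 4421 current "$2" &&
	port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
	port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
}

# set_ANA_state <state for 4420> <state for 4421>: flip both listeners on cnode1
set_ANA_state() {
	$rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
	$rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

Under this sketch, the check_status true true true true true true call at @121 above is six port_status assertions in a row, which matches the rpc.py-then-jq cadence visible throughout the trace.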
00:22:51.725 [2024-07-15 23:49:08.438208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:63448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.725 [2024-07-15 23:49:08.438293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:51.725 [2024-07-15 23:49:08.438351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:63472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.725 [2024-07-15 23:49:08.438371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:51.725 [2024-07-15 23:49:08.438404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:63480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.725 [2024-07-15 23:49:08.438420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:51.725 [2024-07-15 23:49:08.438441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:63488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.725 [2024-07-15 23:49:08.438456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:51.725 [2024-07-15 23:49:08.438477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:63496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.725 [2024-07-15 23:49:08.438492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:51.725 [2024-07-15 23:49:08.438512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:63504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.725 [2024-07-15 23:49:08.438528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:51.725 [2024-07-15 23:49:08.438549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:63512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.725 [2024-07-15 23:49:08.438565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:51.725 [2024-07-15 23:49:08.438594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:63520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.725 [2024-07-15 23:49:08.438609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:51.725 [2024-07-15 23:49:08.438630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:63528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.725 [2024-07-15 23:49:08.438644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:51.725 [2024-07-15 23:49:08.438665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:63536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.725 [2024-07-15 23:49:08.438680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:51.725 [2024-07-15 23:49:08.438701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:63544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.725 [2024-07-15 23:49:08.438725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:51.725 [2024-07-15 23:49:08.438747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.725 [2024-07-15 23:49:08.438762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:51.725 [2024-07-15 23:49:08.438783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.725 [2024-07-15 23:49:08.438814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:51.725 [2024-07-15 23:49:08.438837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:63568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.726 [2024-07-15 23:49:08.438852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:51.726 [2024-07-15 23:49:08.438874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:63576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.726 [2024-07-15 23:49:08.438890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:51.726 [2024-07-15 23:49:08.438912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:63584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.726 [2024-07-15 23:49:08.438928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:51.726 [2024-07-15 23:49:08.438965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:63592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.726 [2024-07-15 23:49:08.438983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:51.726 [2024-07-15 23:49:08.439368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:63600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.726 [2024-07-15 23:49:08.439393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:51.726 [2024-07-15 23:49:08.439421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:63608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.726 [2024-07-15 23:49:08.439439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:51.726 [2024-07-15 23:49:08.439463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.726 [2024-07-15 23:49:08.439479] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:51.726 [2024-07-15 23:49:08.439503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:63624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.726 [2024-07-15 23:49:08.439519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:51.726 [2024-07-15 23:49:08.439542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:63632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.726 [2024-07-15 23:49:08.439558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:51.726 [2024-07-15 23:49:08.439581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:63640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.726 [2024-07-15 23:49:08.439597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:51.726 [2024-07-15 23:49:08.439626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:63648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.726 [2024-07-15 23:49:08.439643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:51.726 [2024-07-15 23:49:08.439666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.726 [2024-07-15 23:49:08.439682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:51.726 [2024-07-15 23:49:08.439712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:63664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.726 [2024-07-15 23:49:08.439743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:51.726 [2024-07-15 23:49:08.439767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.726 [2024-07-15 23:49:08.439808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:51.726 [2024-07-15 23:49:08.439832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.726 [2024-07-15 23:49:08.439848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.726 [2024-07-15 23:49:08.439871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.726 [2024-07-15 23:49:08.439887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.726 [2024-07-15 23:49:08.439910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:63696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.726 
[2024-07-15 23:49:08.439926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:51.726 [2024-07-15 23:49:08.439949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.726 [2024-07-15 23:49:08.439975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:51.726 [2024-07-15 23:49:08.440000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:63712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.726 [2024-07-15 23:49:08.440017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:51.726 [2024-07-15 23:49:08.440040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:63720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.726 [2024-07-15 23:49:08.440056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:51.726 [2024-07-15 23:49:08.440079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:63728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.726 [2024-07-15 23:49:08.440095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:51.726 [2024-07-15 23:49:08.440119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:63736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.726 [2024-07-15 23:49:08.440134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:51.726 [2024-07-15 23:49:08.440162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:63744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.726 [2024-07-15 23:49:08.440178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:51.726 [2024-07-15 23:49:08.440202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.726 [2024-07-15 23:49:08.440218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:51.726 [2024-07-15 23:49:08.440241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:63760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.726 [2024-07-15 23:49:08.440266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:51.726 [2024-07-15 23:49:08.440289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.726 [2024-07-15 23:49:08.440305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:51.726 [2024-07-15 23:49:08.440329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:63776 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.726 [2024-07-15 23:49:08.440344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:51.726 [2024-07-15 23:49:08.440368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:63784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.726 [2024-07-15 23:49:08.440384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:51.726 [2024-07-15 23:49:08.440407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:63792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.726 [2024-07-15 23:49:08.440435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:51.726 [2024-07-15 23:49:08.440458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.726 [2024-07-15 23:49:08.440474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:51.727 [2024-07-15 23:49:08.440498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:63808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.727 [2024-07-15 23:49:08.440514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:51.727 [2024-07-15 23:49:08.440538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:63816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.727 [2024-07-15 23:49:08.440554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:51.727 [2024-07-15 23:49:08.440577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:63824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.727 [2024-07-15 23:49:08.440593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:51.727 [2024-07-15 23:49:08.440617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:63832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.727 [2024-07-15 23:49:08.440633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:51.727 [2024-07-15 23:49:08.440656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:63840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.727 [2024-07-15 23:49:08.440676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:51.727 [2024-07-15 23:49:08.440701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:63848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.727 [2024-07-15 23:49:08.440716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:51.727 [2024-07-15 23:49:08.440740] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:63856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.727 [2024-07-15 23:49:08.440756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:51.727 [2024-07-15 23:49:08.440780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.727 [2024-07-15 23:49:08.440796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:51.727 [2024-07-15 23:49:08.440818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.727 [2024-07-15 23:49:08.440835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:51.727 [2024-07-15 23:49:08.440858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:63880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.727 [2024-07-15 23:49:08.440874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:51.727 [2024-07-15 23:49:08.440896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:63888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.727 [2024-07-15 23:49:08.440912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:51.727 [2024-07-15 23:49:08.440936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.727 [2024-07-15 23:49:08.440953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:51.727 [2024-07-15 23:49:08.440985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:63904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.727 [2024-07-15 23:49:08.441002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:51.727 [2024-07-15 23:49:08.441025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:63912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.727 [2024-07-15 23:49:08.441041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:51.727 [2024-07-15 23:49:08.441065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:63920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.727 [2024-07-15 23:49:08.441080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:51.727 [2024-07-15 23:49:08.441103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:63928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.727 [2024-07-15 23:49:08.441119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.727 [2024-07-15 23:49:08.441143] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.727 [2024-07-15 23:49:08.441162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.727 [2024-07-15 23:49:08.441186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:63944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.727 [2024-07-15 23:49:08.441203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.727 [2024-07-15 23:49:08.441226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.727 [2024-07-15 23:49:08.441247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:51.727 [2024-07-15 23:49:08.441286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.727 [2024-07-15 23:49:08.441312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:51.727 [2024-07-15 23:49:08.441334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:63968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.727 [2024-07-15 23:49:08.441350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:51.727 [2024-07-15 23:49:08.441372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:63976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.727 [2024-07-15 23:49:08.441387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:51.727 [2024-07-15 23:49:08.441410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:63984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.727 [2024-07-15 23:49:08.441425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:51.727 [2024-07-15 23:49:08.441452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.727 [2024-07-15 23:49:08.441467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:51.727 [2024-07-15 23:49:08.441489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:64000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.727 [2024-07-15 23:49:08.441505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:51.727 [2024-07-15 23:49:08.441527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.727 [2024-07-15 23:49:08.441542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000a p:0 m:0 
dnr:0 00:22:51.727 [2024-07-15 23:49:08.441564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:64016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.727 [2024-07-15 23:49:08.441580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:51.727 [2024-07-15 23:49:08.441603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:64024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.727 [2024-07-15 23:49:08.441619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:51.727 [2024-07-15 23:49:08.441641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.727 [2024-07-15 23:49:08.441657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:51.727 [2024-07-15 23:49:08.441691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:64040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.728 [2024-07-15 23:49:08.441707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:51.728 [2024-07-15 23:49:08.441730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:64048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.728 [2024-07-15 23:49:08.441745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:51.728 [2024-07-15 23:49:08.441767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:64056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.728 [2024-07-15 23:49:08.441783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:51.728 [2024-07-15 23:49:08.441805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:64064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.728 [2024-07-15 23:49:08.441821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:51.728 [2024-07-15 23:49:08.441843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:64072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.728 [2024-07-15 23:49:08.441860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:51.728 [2024-07-15 23:49:08.441882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:64080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.728 [2024-07-15 23:49:08.441898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:51.728 [2024-07-15 23:49:08.441921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:64088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.728 [2024-07-15 23:49:08.441972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:51.728 [2024-07-15 23:49:08.442000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:64096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.728 [2024-07-15 23:49:08.442016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:51.728 [2024-07-15 23:49:08.442040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:64104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.728 [2024-07-15 23:49:08.442057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:51.728 [2024-07-15 23:49:08.442224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:64112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.728 [2024-07-15 23:49:08.442245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:51.728 [2024-07-15 23:49:08.442293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.728 [2024-07-15 23:49:08.442310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:51.728 [2024-07-15 23:49:08.442339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:64128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.728 [2024-07-15 23:49:08.442355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:51.728 [2024-07-15 23:49:08.442390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:64136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.728 [2024-07-15 23:49:08.442406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:51.728 [2024-07-15 23:49:08.442433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:64144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.728 [2024-07-15 23:49:08.442449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:51.728 [2024-07-15 23:49:08.442477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:64152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.728 [2024-07-15 23:49:08.442493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:51.728 [2024-07-15 23:49:08.442520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:64160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.728 [2024-07-15 23:49:08.442538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:51.728 [2024-07-15 23:49:08.442566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:64168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.728 [2024-07-15 23:49:08.442582] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:51.728 [2024-07-15 23:49:08.442609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:64176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.728 [2024-07-15 23:49:08.442624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:51.728 [2024-07-15 23:49:08.442651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:64184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.728 [2024-07-15 23:49:08.442667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:51.728 [2024-07-15 23:49:08.442694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:64192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.728 [2024-07-15 23:49:08.442710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.728 [2024-07-15 23:49:08.442737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:64200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.728 [2024-07-15 23:49:08.442753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.728 [2024-07-15 23:49:08.442780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:64208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.728 [2024-07-15 23:49:08.442796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:51.728 [2024-07-15 23:49:08.442824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.728 [2024-07-15 23:49:08.442839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:51.728 [2024-07-15 23:49:08.442867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:63456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.728 [2024-07-15 23:49:08.442882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:51.728 [2024-07-15 23:49:08.442910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:63464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.728 [2024-07-15 23:49:08.442929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:51.728 [2024-07-15 23:49:08.442993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:64224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.728 [2024-07-15 23:49:08.443011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:51.728 [2024-07-15 23:49:08.443039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:64232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
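Each record pair in this dump is an I/O submission (nvme_io_qpair_print_command) followed by its completion (spdk_nvme_print_completion), and every completion in this 23:49:08 burst carries ASYMMETRIC ACCESS INACCESSIBLE (03/02), the path-related NVMe status the target evidently returns once a listener's ANA state has been flipped away under live I/O. A quick way to tally the burst in the captured trace (a sketch; try.txt is the file dumped above):

# submissions vs. ANA-inaccessible completions in the captured bdevperf log
trace=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
grep -c 'nvme_io_qpair_print_command' "$trace"
grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' "$trace"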
00:22:51.728 [2024-07-15 23:49:08.443056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:22:51.728 [2024-07-15 23:49:08.443084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:51.728 [2024-07-15 23:49:08.443100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 
[... 27 further WRITE command / ASYMMETRIC ACCESS INACCESSIBLE (03/02) completion pairs elided (lba 64248-64456, sqhd 002a-0044); identical format, differing only in cid, lba, and sqhd ...]
00:22:51.729 [2024-07-15 23:49:08.444399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:64464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:51.729 [2024-07-15 23:49:08.444415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 
00:22:51.729 [2024-07-15 23:49:23.974815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:47096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:51.729 [2024-07-15 23:49:23.974883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 
[... remaining READ/WRITE command / ASYMMETRIC ACCESS INACCESSIBLE (03/02) completion pairs elided (lba 47040-48128; sqhd 0006 through 0034, wrapping at 007f); identical format, differing only in opcode, cid, lba, and sqhd ...]
00:22:51.736 [2024-07-15 23:49:23.989255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:48144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:51.736 [2024-07-15 23:49:23.989275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 
00:22:51.736 [2024-07-15 23:49:23.989316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:48160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.736 [2024-07-15 23:49:23.989332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:51.736 [2024-07-15 23:49:23.989367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:48176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.736 [2024-07-15 23:49:23.989382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:51.736 [2024-07-15 23:49:23.989403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:48192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.736 [2024-07-15 23:49:23.989419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:51.736 [2024-07-15 23:49:23.989439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:48208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.736 [2024-07-15 23:49:23.989454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:51.736 [2024-07-15 23:49:23.989474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:48224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.736 [2024-07-15 23:49:23.989488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:51.736 [2024-07-15 23:49:23.989508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:48240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.736 [2024-07-15 23:49:23.989523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:51.736 [2024-07-15 23:49:23.989543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:48256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.736 [2024-07-15 23:49:23.989558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:51.736 [2024-07-15 23:49:23.989578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:48272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.736 [2024-07-15 23:49:23.989608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:51.736 [2024-07-15 23:49:23.989630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:48288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.736 [2024-07-15 23:49:23.989645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:51.736 [2024-07-15 23:49:23.989666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:48304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.736 [2024-07-15 23:49:23.989681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:43 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:51.736 [2024-07-15 23:49:23.989718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:48320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.736 [2024-07-15 23:49:23.989734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:51.736 [2024-07-15 23:49:23.989756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:48104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.736 [2024-07-15 23:49:23.989775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.736 [2024-07-15 23:49:23.989798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:47768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.736 [2024-07-15 23:49:23.989814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.736 [2024-07-15 23:49:23.989836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:47832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.736 [2024-07-15 23:49:23.989852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:51.736 [2024-07-15 23:49:23.989873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:47896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.736 [2024-07-15 23:49:23.989889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:51.736 [2024-07-15 23:49:23.989910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:47960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.736 [2024-07-15 23:49:23.989926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:51.736 [2024-07-15 23:49:23.989947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:48016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.736 [2024-07-15 23:49:23.989972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:51.736 [2024-07-15 23:49:23.989996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:47344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.736 [2024-07-15 23:49:23.990011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:51.736 [2024-07-15 23:49:23.990033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:47472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.736 [2024-07-15 23:49:23.990049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:51.736 [2024-07-15 23:49:23.990071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:47600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.736 [2024-07-15 23:49:23.990086] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:51.736 [2024-07-15 23:49:23.990108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:48048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.736 [2024-07-15 23:49:23.990123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:51.736 [2024-07-15 23:49:23.990145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:47824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.736 [2024-07-15 23:49:23.990162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:51.736 [2024-07-15 23:49:23.991080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:47952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.736 [2024-07-15 23:49:23.991105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:51.736 [2024-07-15 23:49:23.991132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:47128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.736 [2024-07-15 23:49:23.991150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:51.736 [2024-07-15 23:49:23.991178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:47256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.736 [2024-07-15 23:49:23.991195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:51.736 [2024-07-15 23:49:23.991217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:47384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.736 [2024-07-15 23:49:23.991248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:51.736 [2024-07-15 23:49:23.991271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:47512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.737 [2024-07-15 23:49:23.991286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:51.737 [2024-07-15 23:49:23.991323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:47608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.737 [2024-07-15 23:49:23.991338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:51.737 [2024-07-15 23:49:23.991358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:48064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.737 [2024-07-15 23:49:23.991373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:51.737 [2024-07-15 23:49:23.991393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:47752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:51.737 [2024-07-15 23:49:23.991408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:51.737 [2024-07-15 23:49:23.991428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:47880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.737 [2024-07-15 23:49:23.991443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:51.737 [2024-07-15 23:49:23.991463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:47048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.737 [2024-07-15 23:49:23.991477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:51.737 [2024-07-15 23:49:23.991498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:47200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.737 [2024-07-15 23:49:23.991512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:51.737 [2024-07-15 23:49:23.991532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:48344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.737 [2024-07-15 23:49:23.991547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:51.737 [2024-07-15 23:49:23.991567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:47096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.737 [2024-07-15 23:49:23.991582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:51.737 [2024-07-15 23:49:23.991602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:47224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.737 [2024-07-15 23:49:23.991616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:51.737 [2024-07-15 23:49:23.991640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:47352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.737 [2024-07-15 23:49:23.991655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:51.737 [2024-07-15 23:49:23.991676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:47480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.737 [2024-07-15 23:49:23.991691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:51.737 [2024-07-15 23:49:23.991711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:47640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.737 [2024-07-15 23:49:23.991725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:51.737 [2024-07-15 23:49:23.991745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:48080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.737 [2024-07-15 23:49:23.991760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:51.737 [2024-07-15 23:49:23.991780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:47784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.737 [2024-07-15 23:49:23.991794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:51.737 [2024-07-15 23:49:23.991815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:47912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.737 [2024-07-15 23:49:23.991829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:51.737 [2024-07-15 23:49:23.991850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:48000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.737 [2024-07-15 23:49:23.991880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:51.737 [2024-07-15 23:49:23.992495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:48368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.737 [2024-07-15 23:49:23.992520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.737 [2024-07-15 23:49:23.992547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:48384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.737 [2024-07-15 23:49:23.992565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.737 [2024-07-15 23:49:23.992587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:48400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.737 [2024-07-15 23:49:23.992603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:51.737 [2024-07-15 23:49:23.992625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:48416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.737 [2024-07-15 23:49:23.992640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:51.737 [2024-07-15 23:49:23.992662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:48432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.737 [2024-07-15 23:49:23.992678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:51.737 [2024-07-15 23:49:23.992700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:48448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.737 [2024-07-15 23:49:23.992720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:51.738 [2024-07-15 23:49:23.992743] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:48464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.738 [2024-07-15 23:49:23.992759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:51.738 [2024-07-15 23:49:23.992781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:48480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.738 [2024-07-15 23:49:23.992796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:51.738 [2024-07-15 23:49:23.992818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:47208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.738 [2024-07-15 23:49:23.992833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:51.738 [2024-07-15 23:49:23.992856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:47336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.738 [2024-07-15 23:49:23.992871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:51.738 [2024-07-15 23:49:23.992893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:47464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.738 [2024-07-15 23:49:23.992908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:51.738 [2024-07-15 23:49:23.992929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:47592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.738 [2024-07-15 23:49:23.992945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:51.738 [2024-07-15 23:49:23.992975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:47720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.738 [2024-07-15 23:49:23.992992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:51.738 [2024-07-15 23:49:23.993014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:48128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.738 [2024-07-15 23:49:23.993029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:51.738 [2024-07-15 23:49:23.993051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:48160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.738 [2024-07-15 23:49:23.993067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:51.738 [2024-07-15 23:49:23.993088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:48192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.738 [2024-07-15 23:49:23.993104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 
00:22:51.738 [2024-07-15 23:49:23.993126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:48224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.738 [2024-07-15 23:49:23.993141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:51.738 [2024-07-15 23:49:23.993163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:48256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.738 [2024-07-15 23:49:23.993183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:51.738 [2024-07-15 23:49:23.993206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:48288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.738 [2024-07-15 23:49:23.993221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:51.738 [2024-07-15 23:49:23.993259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:48320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.738 [2024-07-15 23:49:23.993275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:51.738 [2024-07-15 23:49:23.993311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:47768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.738 [2024-07-15 23:49:23.993326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:51.738 [2024-07-15 23:49:23.993347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:47896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.738 [2024-07-15 23:49:23.993361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:51.738 [2024-07-15 23:49:23.993381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:48016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.738 [2024-07-15 23:49:23.993396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:51.738 [2024-07-15 23:49:23.993416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:47472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.738 [2024-07-15 23:49:23.993431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:51.738 [2024-07-15 23:49:23.993452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:48048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.738 [2024-07-15 23:49:23.993466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:51.738 [2024-07-15 23:49:23.995336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:48120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.738 [2024-07-15 23:49:23.995361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:51.738 [2024-07-15 23:49:23.995389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:48152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.738 [2024-07-15 23:49:23.995406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:51.738 [2024-07-15 23:49:23.995428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:48184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.738 [2024-07-15 23:49:23.995444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:51.738 [2024-07-15 23:49:23.995466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:48216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.738 [2024-07-15 23:49:23.995482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:51.738 [2024-07-15 23:49:23.995504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:48248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.738 [2024-07-15 23:49:23.995520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:51.738 [2024-07-15 23:49:23.995547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:48280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.738 [2024-07-15 23:49:23.995564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:51.738 [2024-07-15 23:49:23.995586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:48312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.738 [2024-07-15 23:49:23.995602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.738 [2024-07-15 23:49:23.995638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:47128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.738 [2024-07-15 23:49:23.995654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.738 [2024-07-15 23:49:23.995675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:47384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.738 [2024-07-15 23:49:23.995705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.738 [2024-07-15 23:49:23.995727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:47608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.738 [2024-07-15 23:49:23.995741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:51.738 [2024-07-15 23:49:23.995761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:47752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.739 [2024-07-15 23:49:23.995776] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:51.739 [2024-07-15 23:49:23.995796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:47048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.739 [2024-07-15 23:49:23.995811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:51.739 [2024-07-15 23:49:23.995831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:48344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.739 [2024-07-15 23:49:23.995846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:51.739 [2024-07-15 23:49:23.995866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:47224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.739 [2024-07-15 23:49:23.995880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:51.739 [2024-07-15 23:49:23.995900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:47480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.739 [2024-07-15 23:49:23.995915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:51.739 [2024-07-15 23:49:23.995950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:48080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.739 [2024-07-15 23:49:23.995976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:51.739 [2024-07-15 23:49:23.996000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:47912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.739 [2024-07-15 23:49:23.996016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:51.739 [2024-07-15 23:49:23.996042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:47192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.739 [2024-07-15 23:49:23.996058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:51.739 [2024-07-15 23:49:23.996080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:47448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.739 [2024-07-15 23:49:23.996096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:51.739 [2024-07-15 23:49:23.996118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:47672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.739 [2024-07-15 23:49:23.996133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:51.739 [2024-07-15 23:49:23.996155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:47816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:51.739 [2024-07-15 23:49:23.996170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:51.739 [2024-07-15 23:49:23.996192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:48032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.739 [2024-07-15 23:49:23.996208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:51.739 [2024-07-15 23:49:23.996229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:48352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.739 [2024-07-15 23:49:23.996260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:51.739 [2024-07-15 23:49:23.996282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:48384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.739 [2024-07-15 23:49:23.996298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:51.739 [2024-07-15 23:49:23.996334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:48416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.739 [2024-07-15 23:49:23.996349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:51.739 [2024-07-15 23:49:23.996370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:48448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.739 [2024-07-15 23:49:23.996384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:51.739 [2024-07-15 23:49:23.996405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:48480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.739 [2024-07-15 23:49:23.996420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:51.739 [2024-07-15 23:49:23.996440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:47336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.739 [2024-07-15 23:49:23.996455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:51.739 [2024-07-15 23:49:23.996475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:47592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.739 [2024-07-15 23:49:23.996490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:51.739 [2024-07-15 23:49:23.996510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:48128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.739 [2024-07-15 23:49:23.996528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:51.739 [2024-07-15 23:49:23.996549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 
lba:48192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.739 [2024-07-15 23:49:23.996564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:51.739 [2024-07-15 23:49:23.996584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:48256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.739 [2024-07-15 23:49:23.996599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:51.739 [2024-07-15 23:49:23.996619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:48320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.739 [2024-07-15 23:49:23.996634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:51.739 [2024-07-15 23:49:23.996654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:47896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.739 [2024-07-15 23:49:23.996669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:51.739 [2024-07-15 23:49:23.996690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:47472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.739 [2024-07-15 23:49:23.996705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:51.739 [2024-07-15 23:49:24.000057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:48488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.739 [2024-07-15 23:49:24.000083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:51.739 [2024-07-15 23:49:24.000112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:48504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.739 [2024-07-15 23:49:24.000130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:51.739 [2024-07-15 23:49:24.000152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:48520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.739 [2024-07-15 23:49:24.000168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:51.739 [2024-07-15 23:49:24.000189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:48536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.739 [2024-07-15 23:49:24.000205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:51.739 [2024-07-15 23:49:24.000242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:48552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.739 [2024-07-15 23:49:24.000258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.739 [2024-07-15 23:49:24.000280] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:48568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.740 [2024-07-15 23:49:24.000311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.740 [2024-07-15 23:49:24.000332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:48584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.740 [2024-07-15 23:49:24.000352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:51.740 [2024-07-15 23:49:24.000374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:48600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.740 [2024-07-15 23:49:24.000389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:51.740 [2024-07-15 23:49:24.000426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:48616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.740 [2024-07-15 23:49:24.000441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:51.740 [2024-07-15 23:49:24.000462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:48632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.740 [2024-07-15 23:49:24.000478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:51.740 [2024-07-15 23:49:24.000499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:48648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.740 [2024-07-15 23:49:24.000514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:51.740 [2024-07-15 23:49:24.000535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:48664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.740 [2024-07-15 23:49:24.000550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:51.740 [2024-07-15 23:49:24.000571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:48680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.740 [2024-07-15 23:49:24.000586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:51.740 [2024-07-15 23:49:24.000607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:48696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.740 [2024-07-15 23:49:24.000622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:51.740 [2024-07-15 23:49:24.000643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:48712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.740 [2024-07-15 23:49:24.000658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002b p:0 m:0 dnr:0 
00:22:51.740 [2024-07-15 23:49:24.000679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:48728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.740 [2024-07-15 23:49:24.000694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:51.740 [2024-07-15 23:49:24.000732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:48376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.740 [2024-07-15 23:49:24.000748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:51.740 [2024-07-15 23:49:24.000770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:48408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.740 [2024-07-15 23:49:24.000786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:51.740 [2024-07-15 23:49:24.000808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:48440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.740 [2024-07-15 23:49:24.000824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:51.740 [2024-07-15 23:49:24.000849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:48472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.740 [2024-07-15 23:49:24.000866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:51.740 [2024-07-15 23:49:24.000888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:48152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.740 [2024-07-15 23:49:24.000903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:51.740 [2024-07-15 23:49:24.000925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:48216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.740 [2024-07-15 23:49:24.000941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:51.740 [2024-07-15 23:49:24.000971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:48280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.740 [2024-07-15 23:49:24.000989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:51.740 [2024-07-15 23:49:24.001012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:47128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.740 [2024-07-15 23:49:24.001028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:51.740 [2024-07-15 23:49:24.001050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:47608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.740 [2024-07-15 23:49:24.001065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
00:22:51.740-00:22:51.747 [2024-07-15 23:49:24.001087 .. 2024-07-15 23:49:24.020381] nvme_qpair.c: repeated NOTICE pairs from 243:nvme_io_qpair_print_command and 474:spdk_nvme_print_completion: every outstanding READ/WRITE I/O on qid:1 (nsid:1, len:8, LBAs roughly 47048-49400) completed with status ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd advancing 0035 through 007f and wrapping to 0000, p:0 m:0 dnr:0 on every entry. Representative pair:
00:22:51.740 [2024-07-15 23:49:24.001087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:47048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:51.740 [2024-07-15 23:49:24.001103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
[... several hundred further command/completion pairs with the same (03/02) status elided ...]
nsid:1 lba:49224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.747 [2024-07-15 23:49:24.020406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:51.747 [2024-07-15 23:49:24.020448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:49408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.747 [2024-07-15 23:49:24.020465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:51.747 [2024-07-15 23:49:24.020502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:49424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.747 [2024-07-15 23:49:24.020517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:51.747 [2024-07-15 23:49:24.020538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:49440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.747 [2024-07-15 23:49:24.020568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:51.747 [2024-07-15 23:49:24.020590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:49456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.748 [2024-07-15 23:49:24.020605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:51.748 [2024-07-15 23:49:24.020626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:49472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.748 [2024-07-15 23:49:24.020641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:51.748 [2024-07-15 23:49:24.020662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:49488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.748 [2024-07-15 23:49:24.020677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:51.748 [2024-07-15 23:49:24.020698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:49504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.748 [2024-07-15 23:49:24.020713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:51.748 [2024-07-15 23:49:24.020740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:49520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.748 [2024-07-15 23:49:24.020756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:51.748 [2024-07-15 23:49:24.020777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:49536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.748 [2024-07-15 23:49:24.020792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:51.748 [2024-07-15 23:49:24.020813] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:49552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.748 [2024-07-15 23:49:24.020828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:51.748 [2024-07-15 23:49:24.020850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:49568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.748 [2024-07-15 23:49:24.020865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:51.748 [2024-07-15 23:49:24.020903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:49584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.748 [2024-07-15 23:49:24.020920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:51.748 [2024-07-15 23:49:24.020941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:49600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.748 [2024-07-15 23:49:24.020964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:51.748 [2024-07-15 23:49:24.020989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:49616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.748 [2024-07-15 23:49:24.021005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:51.748 [2024-07-15 23:49:24.021027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:49256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.748 [2024-07-15 23:49:24.021042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:51.748 [2024-07-15 23:49:24.021064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:49288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.748 [2024-07-15 23:49:24.021080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:51.748 [2024-07-15 23:49:24.021101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:49320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.748 [2024-07-15 23:49:24.021117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:51.748 [2024-07-15 23:49:24.021139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:49352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.748 [2024-07-15 23:49:24.021155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:51.748 [2024-07-15 23:49:24.021176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:49384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.748 [2024-07-15 23:49:24.021192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 
00:22:51.748 [2024-07-15 23:49:24.021213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.748 [2024-07-15 23:49:24.021233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:51.748 [2024-07-15 23:49:24.021270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:48696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.748 [2024-07-15 23:49:24.021286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:51.748 [2024-07-15 23:49:24.021308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:49040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.748 [2024-07-15 23:49:24.021323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:51.748 [2024-07-15 23:49:24.021344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:48760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.748 [2024-07-15 23:49:24.021360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:51.748 [2024-07-15 23:49:24.021381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:49008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.748 [2024-07-15 23:49:24.021411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:51.748 [2024-07-15 23:49:24.021433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:48728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.748 [2024-07-15 23:49:24.021448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:51.748 [2024-07-15 23:49:24.021485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:49400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.748 [2024-07-15 23:49:24.021501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:51.748 [2024-07-15 23:49:24.021523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:49176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.748 [2024-07-15 23:49:24.021538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:51.748 [2024-07-15 23:49:24.021560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:48272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.748 [2024-07-15 23:49:24.021575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:51.748 [2024-07-15 23:49:24.021597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:49168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.748 [2024-07-15 23:49:24.021613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.748 [2024-07-15 23:49:24.021634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:48128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.748 [2024-07-15 23:49:24.021650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.748 [2024-07-15 23:49:24.021672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:48872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.748 [2024-07-15 23:49:24.021688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:51.748 [2024-07-15 23:49:24.021709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:49360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.749 [2024-07-15 23:49:24.021729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:51.749 [2024-07-15 23:49:24.021751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:49056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.749 [2024-07-15 23:49:24.021767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:51.749 [2024-07-15 23:49:24.021789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:49120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.749 [2024-07-15 23:49:24.021805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:51.749 [2024-07-15 23:49:24.021827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:48536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.749 [2024-07-15 23:49:24.021843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:51.749 [2024-07-15 23:49:24.021864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:48256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.749 [2024-07-15 23:49:24.021880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:51.749 [2024-07-15 23:49:24.021901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:48856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.749 [2024-07-15 23:49:24.021917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:51.749 [2024-07-15 23:49:24.021939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:49184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.749 [2024-07-15 23:49:24.021961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:51.749 [2024-07-15 23:49:24.023555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:49264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.749 [2024-07-15 23:49:24.023595] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:51.749 [2024-07-15 23:49:24.023623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:49656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.749 [2024-07-15 23:49:24.023654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:51.749 [2024-07-15 23:49:24.023677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:49672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.749 [2024-07-15 23:49:24.023692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:51.749 [2024-07-15 23:49:24.023713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:49688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.749 [2024-07-15 23:49:24.023728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:51.749 [2024-07-15 23:49:24.023748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:49704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.749 [2024-07-15 23:49:24.023778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:51.749 [2024-07-15 23:49:24.023801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:49720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.749 [2024-07-15 23:49:24.023816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:51.749 [2024-07-15 23:49:24.023843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:49736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.749 [2024-07-15 23:49:24.023859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:51.749 [2024-07-15 23:49:24.023880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:49752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.749 [2024-07-15 23:49:24.023896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:51.749 [2024-07-15 23:49:24.023917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:49768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.749 [2024-07-15 23:49:24.023932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:51.749 [2024-07-15 23:49:24.023953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:49784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.749 [2024-07-15 23:49:24.023992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:51.749 [2024-07-15 23:49:24.024015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:49800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.749 
[2024-07-15 23:49:24.024032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:51.749 [2024-07-15 23:49:24.024053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:49344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.749 [2024-07-15 23:49:24.024085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:51.749 [2024-07-15 23:49:24.024106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:49072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.749 [2024-07-15 23:49:24.024121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:51.749 [2024-07-15 23:49:24.024143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:48600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.749 [2024-07-15 23:49:24.024158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:51.749 [2024-07-15 23:49:24.024196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:49808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.749 [2024-07-15 23:49:24.024212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:51.749 [2024-07-15 23:49:24.024233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:49824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.749 [2024-07-15 23:49:24.024249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:51.749 [2024-07-15 23:49:24.024271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:49840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.750 [2024-07-15 23:49:24.024287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:51.750 [2024-07-15 23:49:24.024309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:49856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.750 [2024-07-15 23:49:24.024325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:51.750 [2024-07-15 23:49:24.024350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:48920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.750 [2024-07-15 23:49:24.024367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:51.750 [2024-07-15 23:49:24.024388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:49408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.750 [2024-07-15 23:49:24.024404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:51.750 [2024-07-15 23:49:24.024425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:49440 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.750 [2024-07-15 23:49:24.024441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:51.750 [2024-07-15 23:49:24.024463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:49472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.750 [2024-07-15 23:49:24.024479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:51.750 [2024-07-15 23:49:24.025034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:49504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.750 [2024-07-15 23:49:24.025058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.750 [2024-07-15 23:49:24.025085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:49536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.750 [2024-07-15 23:49:24.025103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.750 [2024-07-15 23:49:24.025125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:49568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.750 [2024-07-15 23:49:24.025141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:51.750 [2024-07-15 23:49:24.025163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:49600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.750 [2024-07-15 23:49:24.025179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:51.750 [2024-07-15 23:49:24.025201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:49256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.750 [2024-07-15 23:49:24.025217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:51.750 [2024-07-15 23:49:24.025238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:49320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.750 [2024-07-15 23:49:24.025269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:51.750 [2024-07-15 23:49:24.025292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:49384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.750 [2024-07-15 23:49:24.025307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:51.750 [2024-07-15 23:49:24.025328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:48696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.750 [2024-07-15 23:49:24.025344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:51.750 [2024-07-15 23:49:24.025365] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:48760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.750 [2024-07-15 23:49:24.025385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:51.750 [2024-07-15 23:49:24.025406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:48728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.750 [2024-07-15 23:49:24.025437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:51.750 [2024-07-15 23:49:24.025459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:49176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.750 [2024-07-15 23:49:24.025473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:51.750 [2024-07-15 23:49:24.025494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:49168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.750 [2024-07-15 23:49:24.025508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:51.750 [2024-07-15 23:49:24.025529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:48872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.750 [2024-07-15 23:49:24.025543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:51.750 [2024-07-15 23:49:24.025563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:49056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.750 [2024-07-15 23:49:24.025578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:51.750 [2024-07-15 23:49:24.025598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:48536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.750 [2024-07-15 23:49:24.025613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:51.750 [2024-07-15 23:49:24.025634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.750 [2024-07-15 23:49:24.025649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:51.750 [2024-07-15 23:49:24.026152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:49248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.750 [2024-07-15 23:49:24.026177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:51.750 [2024-07-15 23:49:24.026204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:49392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.750 [2024-07-15 23:49:24.026222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0052 p:0 m:0 
dnr:0 00:22:51.750 [2024-07-15 23:49:24.026245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:49872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.750 [2024-07-15 23:49:24.026260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:51.750 [2024-07-15 23:49:24.026282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:49888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.750 [2024-07-15 23:49:24.026298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:51.750 [2024-07-15 23:49:24.026320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:49904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.750 [2024-07-15 23:49:24.026340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:51.750 [2024-07-15 23:49:24.026363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:49920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.750 [2024-07-15 23:49:24.026379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:51.750 [2024-07-15 23:49:24.026400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:49432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.750 [2024-07-15 23:49:24.026416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:51.750 [2024-07-15 23:49:24.026454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:49464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.750 [2024-07-15 23:49:24.026469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:51.751 [2024-07-15 23:49:24.026506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:49496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.751 [2024-07-15 23:49:24.026522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:51.751 [2024-07-15 23:49:24.026543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:49528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.751 [2024-07-15 23:49:24.026558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:51.751 [2024-07-15 23:49:24.026578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:49560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.751 [2024-07-15 23:49:24.026593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:51.751 [2024-07-15 23:49:24.026613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:49592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.751 [2024-07-15 23:49:24.026627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:51.751 [2024-07-15 23:49:24.026648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:49624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.751 [2024-07-15 23:49:24.026678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:51.751 [2024-07-15 23:49:24.026700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:49928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.751 [2024-07-15 23:49:24.026715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:51.751 [2024-07-15 23:49:24.026736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:49944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.751 [2024-07-15 23:49:24.026752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:51.751 [2024-07-15 23:49:24.026773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:49960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.751 [2024-07-15 23:49:24.026788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:51.751 [2024-07-15 23:49:24.028181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:49656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.751 [2024-07-15 23:49:24.028207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.751 [2024-07-15 23:49:24.028239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:49688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.751 [2024-07-15 23:49:24.028257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.751 [2024-07-15 23:49:24.028279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:49720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.751 [2024-07-15 23:49:24.028295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:51.751 [2024-07-15 23:49:24.028317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:49752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.751 [2024-07-15 23:49:24.028333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:51.751 [2024-07-15 23:49:24.028354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:49784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.751 [2024-07-15 23:49:24.028371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:51.751 [2024-07-15 23:49:24.028392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:49344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.751 [2024-07-15 23:49:24.028408] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:51.751 [2024-07-15 23:49:24.028430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:48600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.751 [2024-07-15 23:49:24.028446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:51.751 [2024-07-15 23:49:24.028467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:49824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.751 [2024-07-15 23:49:24.028483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:51.751 [2024-07-15 23:49:24.028504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:49856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.751 [2024-07-15 23:49:24.028520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:51.751 [2024-07-15 23:49:24.028542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:49408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.751 [2024-07-15 23:49:24.028573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:51.751 [2024-07-15 23:49:24.028596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:49472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.751 [2024-07-15 23:49:24.028611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:51.751 [2024-07-15 23:49:24.028647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:49216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.751 [2024-07-15 23:49:24.028663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:51.751 [2024-07-15 23:49:24.028683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.751 [2024-07-15 23:49:24.028698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:51.751 [2024-07-15 23:49:24.028722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:49600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.751 [2024-07-15 23:49:24.028738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:51.751 [2024-07-15 23:49:24.028758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:49320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.751 [2024-07-15 23:49:24.028773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:51.751 [2024-07-15 23:49:24.028793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:48696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:51.751 [2024-07-15 23:49:24.028808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:51.751 [2024-07-15 23:49:24.028828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:48728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.751 [2024-07-15 23:49:24.028843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:51.751 [2024-07-15 23:49:24.028863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:49168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.751 [2024-07-15 23:49:24.028877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:51.751 [2024-07-15 23:49:24.028897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:49056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.751 [2024-07-15 23:49:24.028912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:51.751 [2024-07-15 23:49:24.028932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:48856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.751 [2024-07-15 23:49:24.028970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:51.751 [2024-07-15 23:49:24.028994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:49680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.751 [2024-07-15 23:49:24.029010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:51.751 [2024-07-15 23:49:24.029030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:49712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.751 [2024-07-15 23:49:24.029046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:51.752 [2024-07-15 23:49:24.029067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:49744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.752 [2024-07-15 23:49:24.029082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:51.752 [2024-07-15 23:49:24.029103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:49776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.752 [2024-07-15 23:49:24.029119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:51.752 [2024-07-15 23:49:24.029140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:49392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.752 [2024-07-15 23:49:24.029155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:51.752 [2024-07-15 23:49:24.029176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 
nsid:1 lba:49888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.752 [2024-07-15 23:49:24.029195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:51.752 [2024-07-15 23:49:24.029217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:49920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.752 [2024-07-15 23:49:24.029233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:51.752 [2024-07-15 23:49:24.029269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:49464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.752 [2024-07-15 23:49:24.029284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:51.752 [2024-07-15 23:49:24.029305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:49528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.752 [2024-07-15 23:49:24.029320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:51.752 [2024-07-15 23:49:24.029340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:49592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.752 [2024-07-15 23:49:24.029355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:51.752 [2024-07-15 23:49:24.029375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:49928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.752 [2024-07-15 23:49:24.029390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:51.752 [2024-07-15 23:49:24.029411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:49960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.752 [2024-07-15 23:49:24.029426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.752 [2024-07-15 23:49:24.030919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:49976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.752 [2024-07-15 23:49:24.030945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.752 [2024-07-15 23:49:24.030994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:49992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.752 [2024-07-15 23:49:24.031014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.752 [2024-07-15 23:49:24.031038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:50008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.752 [2024-07-15 23:49:24.031054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:51.752 [2024-07-15 23:49:24.031076] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:50024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.752 [2024-07-15 23:49:24.031091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:51.752 [2024-07-15 23:49:24.031113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:50040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.752 [2024-07-15 23:49:24.031129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:51.752 [2024-07-15 23:49:24.031151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:50056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.752 [2024-07-15 23:49:24.031172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:51.752 [2024-07-15 23:49:24.031195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:50072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.752 [2024-07-15 23:49:24.031211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:51.752 [2024-07-15 23:49:24.031232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:50088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.752 [2024-07-15 23:49:24.031248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:51.752 [2024-07-15 23:49:24.031270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:50104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.752 [2024-07-15 23:49:24.031286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:51.752 [2024-07-15 23:49:24.031307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:50120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.752 [2024-07-15 23:49:24.031323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:51.752 [2024-07-15 23:49:24.031345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:49832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.752 [2024-07-15 23:49:24.031361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:51.752 [2024-07-15 23:49:24.031383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:49424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.752 [2024-07-15 23:49:24.031399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:51.752 [2024-07-15 23:49:24.031420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:49488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.752 [2024-07-15 23:49:24.031436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000d p:0 m:0 dnr:0 
00:22:51.752 Received shutdown signal, test time was about 32.383176 seconds
00:22:51.752
00:22:51.752 Latency(us)
00:22:51.752 Device Information                                                        : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:22:51.752 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:51.752 Verification LBA range: start 0x0 length 0x4000
00:22:51.752 Nvme0n1                                                                   :      32.38    7939.70      31.01       0.00     0.00   16094.80     179.01 4026531.84
00:22:51.752 ===================================================================================================================
00:22:51.752 Total                                                                     :               7939.70      31.01       0.00     0.00   16094.80     179.01 4026531.84
00:22:51.752 23:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:52.031 23:49:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:22:52.031 23:49:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:22:52.031 23:49:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:22:52.031 23:49:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:22:52.031 23:49:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:22:52.031 23:49:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:22:52.031 23:49:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:22:52.031 23:49:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:22:52.031 23:49:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
23:49:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
23:49:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
23:49:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
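The summary table above is internally consistent: 7939.70 IOPS of 4096-byte I/O works out to 7939.70 * 4096 / 2^20 = 31.01 MiB/s, matching the MiB/s column. The teardown traced above, together with the killprocess call that follows, boils down to the sequence below; this is a hand-written condensation for reference, not the script itself, and the retry/sleep details inside the module-unload loop are assumptions, since the trace only shows the first, successful attempt:

#!/usr/bin/env bash
# Condensed sketch of the multipath_status.sh teardown traced in this log.
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Drop the subsystem the test created, clear traps, remove scratch output.
"$rootdir/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
trap - SIGINT SIGTERM EXIT
rm -f "$rootdir/test/nvmf/host/try.txt"

# Unload the kernel initiator modules; retried because they can stay busy
# briefly after the last disconnect (retry body is an assumption).
set +e
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break
    sleep 1
done
modprobe -v -r nvme-fabrics
set -e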
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:52.031 23:49:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3855335 00:22:52.289 23:49:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:52.289 23:49:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:52.289 23:49:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3855335' 00:22:52.289 killing process with pid 3855335 00:22:52.289 23:49:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3855335 00:22:52.289 23:49:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3855335 00:22:52.548 23:49:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:52.548 23:49:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:52.548 23:49:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:52.548 23:49:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:52.548 23:49:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:52.548 23:49:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.548 23:49:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:52.548 23:49:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.455 23:49:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:54.455 00:22:54.455 real 0m40.990s 00:22:54.455 user 2m3.990s 00:22:54.455 sys 0m10.273s 00:22:54.455 23:49:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:54.455 23:49:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:54.455 ************************************ 00:22:54.455 END TEST nvmf_host_multipath_status 00:22:54.455 ************************************ 00:22:54.455 23:49:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:54.455 23:49:29 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:54.455 23:49:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:54.455 23:49:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:54.455 23:49:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:54.455 ************************************ 00:22:54.455 START TEST nvmf_discovery_remove_ifc 00:22:54.455 ************************************ 00:22:54.455 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:54.714 * Looking for test storage... 
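The failed I/Os in the multipath burst above all completed with "ASYMMETRIC ACCESS INACCESSIBLE (03/02)": Status Code Type 0x3 (Path Related Status) with Status Code 0x02, which is exactly the fault the multipath_status test provokes; the host is expected to retry those I/Os on the surviving path rather than fail them. As a minimal sketch (not part of the SPDK scripts; the packing of the status field with SC in bits 7:0 and SCT in bits 10:8 is an assumption of this snippet), the "(SCT/SC)" pair SPDK prints can be split out with plain shell arithmetic:

    # minimal sketch, not from the SPDK tree: split the "(SCT/SC)" pair shown above
    decode_nvme_status() {
        local sf=$1    # status field, SC in bits 7:0, SCT in bits 10:8 (assumed packing)
        printf 'sct=0x%02x sc=0x%02x\n' $(( (sf >> 8) & 0x7 )) $(( sf & 0xff ))
    }
    decode_nvme_status 0x0302    # sct=0x03 sc=0x02 -> Path Related / ANA Inaccessible
    decode_nvme_status 0x0008    # sct=0x00 sc=0x08 -> Generic / ABORTED - SQ DELETION

The second example is the status reported later in this log by admin commands that get aborted when a controller's queues are deleted during teardown.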
00:22:54.714 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:54.714 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:54.714 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:22:54.714 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:54.714 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:54.714 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:54.714 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:54.714 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:54.714 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:54.714 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:54.714 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:54.714 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:54.714 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:54.715 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:54.715 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:54.715 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:54.715 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:54.715 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:54.715 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:54.715 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:54.715 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:54.715 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:54.715 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:54.715 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.715 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.715 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.715 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:22:54.715 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.715 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:22:54.715 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:54.715 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:54.715 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:54.715 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:54.715 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:54.715 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:54.715 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:54.715 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:54.715 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:22:54.715 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:54.715 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:54.715 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:54.715 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:54.715 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:22:54.715 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:54.715 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:54.715 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:54.715 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:54.715 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:54.715 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:54.715 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.715 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:54.715 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.715 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:54.715 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:54.715 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:22:54.715 23:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:56.616 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:56.616 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:56.616 23:49:31 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:56.616 Found net devices under 0000:09:00.0: cvl_0_0 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:56.616 Found net devices under 0000:09:00.1: cvl_0_1 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:56.616 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:56.875 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:56.875 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:56.875 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:56.875 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:56.875 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:56.875 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:56.875 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:56.875 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:56.875 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:22:56.875 00:22:56.875 --- 10.0.0.2 ping statistics --- 00:22:56.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:56.875 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:22:56.875 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:56.875 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:56.875 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:22:56.875 00:22:56.875 --- 10.0.0.1 ping statistics --- 00:22:56.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:56.875 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:22:56.875 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:56.875 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:22:56.875 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:56.875 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:56.875 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:56.875 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:56.875 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:56.875 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:56.875 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:56.875 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:56.875 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:56.875 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:56.875 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:56.875 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=3861707 00:22:56.875 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:56.875 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 3861707 00:22:56.875 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 3861707 ']' 00:22:56.875 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:56.875 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:56.875 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:56.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:56.875 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:56.875 23:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:56.875 [2024-07-15 23:49:31.912628] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
00:22:56.875 [2024-07-15 23:49:31.912699] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:56.875 EAL: No free 2048 kB hugepages reported on node 1 00:22:56.875 [2024-07-15 23:49:31.975867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.134 [2024-07-15 23:49:32.082855] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:57.134 [2024-07-15 23:49:32.082907] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:57.134 [2024-07-15 23:49:32.082935] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:57.134 [2024-07-15 23:49:32.082947] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:57.134 [2024-07-15 23:49:32.082964] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:57.134 [2024-07-15 23:49:32.083007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:57.134 23:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:57.134 23:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:22:57.134 23:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:57.134 23:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:57.134 23:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:57.134 23:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:57.134 23:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:57.134 23:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.134 23:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:57.134 [2024-07-15 23:49:32.220837] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:57.134 [2024-07-15 23:49:32.229021] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:57.134 null0 00:22:57.392 [2024-07-15 23:49:32.261019] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:57.392 23:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.392 23:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3861845 00:22:57.392 23:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:57.392 23:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3861845 /tmp/host.sock 00:22:57.392 23:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 3861845 ']' 00:22:57.392 23:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:57.392 23:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:22:57.392 23:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:57.392 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:57.392 23:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:57.392 23:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:57.392 [2024-07-15 23:49:32.324749] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:22:57.392 [2024-07-15 23:49:32.324833] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3861845 ] 00:22:57.392 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.392 [2024-07-15 23:49:32.381397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.392 [2024-07-15 23:49:32.485496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:57.650 23:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:57.650 23:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:22:57.650 23:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:57.650 23:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:57.650 23:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.650 23:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:57.650 23:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.650 23:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:57.650 23:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.650 23:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:57.650 23:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.650 23:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:57.650 23:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.650 23:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:58.582 [2024-07-15 23:49:33.688713] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:58.582 [2024-07-15 23:49:33.688736] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:58.582 [2024-07-15 23:49:33.688756] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:58.839 [2024-07-15 23:49:33.816210] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:59.097 [2024-07-15 23:49:34.001849] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:59.097 [2024-07-15 23:49:34.001901] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:59.097 [2024-07-15 23:49:34.001952] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:59.097 [2024-07-15 23:49:34.001984] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:59.097 [2024-07-15 23:49:34.002008] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:59.097 23:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.097 23:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:59.097 23:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:59.097 23:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:59.097 23:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.097 23:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:59.097 23:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:59.097 23:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:59.097 23:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:59.097 [2024-07-15 23:49:34.007100] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1e0d870 was disconnected and freed. delete nvme_qpair. 
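At this point bdev_nvme_start_discovery has attached the discovered subsystem as controller nvme0, and the script waits for its namespace to surface as a bdev. Reconstructed from the rpc_cmd/jq/sort/xargs calls traced above (helper names as printed by the script), the wait amounts to a one-second poll of the host app's bdev list:

    # reconstructed from the trace above; rpc_cmd talks to the host app on /tmp/host.sock
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    while [[ "$(get_bdev_list)" != "nvme0n1" ]]; do
        sleep 1
    done

The same loop is reused below with different expected values: '' once the path has been cut, and nvme1n1 once it has been restored.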
00:22:59.097 23:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.097 23:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:59.097 23:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:22:59.097 23:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:22:59.097 23:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:59.097 23:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:59.097 23:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:59.097 23:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:59.097 23:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.097 23:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:59.097 23:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:59.097 23:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:59.097 23:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.097 23:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:59.097 23:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:00.029 23:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:00.029 23:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:00.029 23:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:00.029 23:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.029 23:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:00.029 23:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:00.029 23:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:00.029 23:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.287 23:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:00.287 23:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:01.220 23:49:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:01.220 23:49:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:01.220 23:49:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:01.220 23:49:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.220 23:49:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:01.220 23:49:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
sort 00:23:01.220 23:49:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:01.220 23:49:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.220 23:49:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:01.220 23:49:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:02.151 23:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:02.151 23:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:02.151 23:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:02.151 23:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.151 23:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:02.151 23:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:02.151 23:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:02.151 23:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.408 23:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:02.408 23:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:03.337 23:49:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:03.337 23:49:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:03.337 23:49:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:03.337 23:49:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.337 23:49:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:03.337 23:49:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:03.337 23:49:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:03.337 23:49:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.337 23:49:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:03.337 23:49:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:04.266 23:49:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:04.266 23:49:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:04.266 23:49:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:04.266 23:49:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.266 23:49:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:04.266 23:49:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:04.266 23:49:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:04.266 23:49:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
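The repeated one-second polls above still return nvme0n1 on purpose: the fault injected at @75/@76 only removed the target-side address and downed the interface inside the target's namespace, so the host does not notice until its keep-alive and reconnect logic trips. The injected fault, as executed earlier in the trace, was simply:

    # the path loss injected at @75/@76 above:
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down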
00:23:04.266 23:49:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:04.266 23:49:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:04.522 [2024-07-15 23:49:39.443309] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:04.522 [2024-07-15 23:49:39.443390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.522 [2024-07-15 23:49:39.443411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.522 [2024-07-15 23:49:39.443427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.522 [2024-07-15 23:49:39.443441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.522 [2024-07-15 23:49:39.443454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.522 [2024-07-15 23:49:39.443467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.522 [2024-07-15 23:49:39.443480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.522 [2024-07-15 23:49:39.443492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.522 [2024-07-15 23:49:39.443505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.522 [2024-07-15 23:49:39.443518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.522 [2024-07-15 23:49:39.443530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd4300 is same with the state(5) to be set 00:23:04.522 [2024-07-15 23:49:39.453326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd4300 (9): Bad file descriptor 00:23:04.522 [2024-07-15 23:49:39.463371] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:05.455 23:49:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:05.455 23:49:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:05.455 23:49:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:05.455 23:49:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.455 23:49:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:05.455 23:49:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:05.455 23:49:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:05.455 [2024-07-15 23:49:40.505997] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:05.455 [2024-07-15 
23:49:40.506074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dd4300 with addr=10.0.0.2, port=4420 00:23:05.455 [2024-07-15 23:49:40.506099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd4300 is same with the state(5) to be set 00:23:05.455 [2024-07-15 23:49:40.506140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd4300 (9): Bad file descriptor 00:23:05.455 [2024-07-15 23:49:40.506537] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:05.455 [2024-07-15 23:49:40.506564] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:05.455 [2024-07-15 23:49:40.506579] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:05.455 [2024-07-15 23:49:40.506604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:05.455 [2024-07-15 23:49:40.506633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:05.455 [2024-07-15 23:49:40.506649] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:05.455 23:49:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.455 23:49:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:05.455 23:49:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:06.389 [2024-07-15 23:49:41.509146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:06.389 [2024-07-15 23:49:41.509200] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:06.389 [2024-07-15 23:49:41.509229] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:06.389 [2024-07-15 23:49:41.509243] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:23:06.389 [2024-07-15 23:49:41.509271] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
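The errno-110 (Connection timed out) failures above are the expected consequence: every reconnect attempt to 10.0.0.2:4420 now times out, and because discovery was started at @69 with deliberately aggressive timers, the controller is declared lost and its bdev deleted after about two seconds instead of the much longer defaults. For reference, the options copied from that earlier trace line were:

    # timers set when discovery was started at @69 (copied from the trace):
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
        --wait-for-attach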
00:23:06.389 [2024-07-15 23:49:41.509320] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:23:06.389 [2024-07-15 23:49:41.509372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:06.389 [2024-07-15 23:49:41.509394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.389 [2024-07-15 23:49:41.509421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:06.389 [2024-07-15 23:49:41.509440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.389 [2024-07-15 23:49:41.509455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:06.389 [2024-07-15 23:49:41.509469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.389 [2024-07-15 23:49:41.509483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:06.389 [2024-07-15 23:49:41.509501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.389 [2024-07-15 23:49:41.509525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:06.389 [2024-07-15 23:49:41.509546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.389 [2024-07-15 23:49:41.509567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:23:06.389 [2024-07-15 23:49:41.509679] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd3780 (9): Bad file descriptor 00:23:06.389 [2024-07-15 23:49:41.510658] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:06.389 [2024-07-15 23:49:41.510680] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:23:06.647 23:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:06.647 23:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:06.647 23:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.647 23:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:06.647 23:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:06.647 23:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:06.647 23:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:06.647 23:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.647 23:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:06.647 23:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:06.647 23:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:06.647 23:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:06.647 23:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:06.647 23:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:06.647 23:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:06.647 23:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.647 23:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:06.647 23:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:06.647 23:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:06.647 23:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.647 23:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:06.647 23:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:07.579 23:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:07.579 23:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:07.579 23:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:07.579 23:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.579 23:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:23:07.579 23:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:07.579 23:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:07.579 23:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.579 23:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:07.579 23:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:08.512 [2024-07-15 23:49:43.564138] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:08.512 [2024-07-15 23:49:43.564161] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:08.512 [2024-07-15 23:49:43.564184] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:08.769 [2024-07-15 23:49:43.650532] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:23:08.769 23:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:08.769 23:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:08.769 23:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.769 23:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:08.769 23:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:08.769 23:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:08.769 23:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:08.769 23:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.769 23:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:08.769 23:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:08.769 [2024-07-15 23:49:43.748398] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:08.769 [2024-07-15 23:49:43.748441] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:08.769 [2024-07-15 23:49:43.748471] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:08.769 [2024-07-15 23:49:43.748490] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:23:08.769 [2024-07-15 23:49:43.748502] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:08.769 [2024-07-15 23:49:43.752761] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1ddb110 was disconnected and freed. delete nvme_qpair. 
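With the address and link restored at @82/@83, the discovery service still listening on 10.0.0.2:8009 re-attaches the subsystem as a fresh controller (nvme1), and the script waits for its namespace exactly as it waited for nvme0n1. The restore mirrors the earlier fault injection:

    # the restore executed at @82/@83 above, followed by the same poll loop:
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    while [[ "$(get_bdev_list)" != "nvme1n1" ]]; do sleep 1; done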
00:23:09.702 23:49:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:09.702 23:49:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:09.702 23:49:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:09.702 23:49:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.702 23:49:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:09.702 23:49:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:09.702 23:49:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:09.702 23:49:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.703 23:49:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:09.703 23:49:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:09.703 23:49:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3861845 00:23:09.703 23:49:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 3861845 ']' 00:23:09.703 23:49:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 3861845 00:23:09.703 23:49:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:23:09.703 23:49:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:09.703 23:49:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3861845 00:23:09.703 23:49:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:09.703 23:49:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:09.703 23:49:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3861845' 00:23:09.703 killing process with pid 3861845 00:23:09.703 23:49:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 3861845 00:23:09.703 23:49:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 3861845 00:23:09.983 23:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:09.983 23:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:09.983 23:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:23:09.983 23:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:09.983 23:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:23:09.983 23:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:09.983 23:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:09.983 rmmod nvme_tcp 00:23:10.257 rmmod nvme_fabrics 00:23:10.257 rmmod nvme_keyring 00:23:10.257 23:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:10.257 23:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:23:10.257 23:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
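Teardown then kills the host app by pid (3861845, reactor_0) and unloads the initiator-side kernel modules, before killing the namespaced target (3861707, reactor_1) on the next line. A rough, simplified reconstruction of autotest_common.sh's killprocess from the steps traced here (the sudo special case at @958 is omitted):

    # simplified reconstruction from the trace; not the verbatim helper
    killprocess() {
        local pid=$1
        kill -0 "$pid"                                              # @952: confirm it is still running
        [[ $(uname) = Linux ]] && ps --no-headers -o comm= "$pid"   # @953/@954: name the reactor
        echo "killing process with pid $pid"                        # @966
        kill "$pid"                                                 # @967
        wait "$pid"                                                 # @972
    }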
00:23:10.257 23:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 3861707 ']' 00:23:10.257 23:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 3861707 00:23:10.257 23:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 3861707 ']' 00:23:10.257 23:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 3861707 00:23:10.257 23:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:23:10.257 23:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:10.257 23:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3861707 00:23:10.257 23:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:10.257 23:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:10.257 23:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3861707' 00:23:10.257 killing process with pid 3861707 00:23:10.257 23:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 3861707 00:23:10.257 23:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 3861707 00:23:10.517 23:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:10.517 23:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:10.517 23:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:10.517 23:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:10.517 23:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:10.517 23:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.517 23:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:10.517 23:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:12.425 23:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:12.425 00:23:12.425 real 0m17.930s 00:23:12.425 user 0m25.857s 00:23:12.425 sys 0m3.133s 00:23:12.425 23:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:12.425 23:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:12.425 ************************************ 00:23:12.425 END TEST nvmf_discovery_remove_ifc 00:23:12.425 ************************************ 00:23:12.425 23:49:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:12.425 23:49:47 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:12.425 23:49:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:12.425 23:49:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:12.425 23:49:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:12.425 ************************************ 00:23:12.425 START TEST nvmf_identify_kernel_target 00:23:12.425 ************************************ 
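The test starting here builds an in-kernel NVMe-oF target over nvmet configfs, then points spdk_nvme_identify at its discovery and NVM subsystems. The configure_kernel_target trace further down (nvmf/common.sh@632 through @677) reduces to roughly the sketch below; the redirection targets are assumptions based on the stock /sys/kernel/config/nvmet layout, since bash xtrace does not print redirections:

# Sketch of configure_kernel_target as traced below; attribute file
# names are assumed from the standard nvmet configfs hierarchy.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
modprobe nvmet        # target core (nvmf/common.sh@642 in the trace)
modprobe nvmet-tcp    # TCP transport (assumed; needed before the tcp port enables)

mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # model string seen in the identify output (@665)
echo 1 > "$subsys/attr_allow_any_host"                         # skip the host allow-list (@667)
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"         # back namespace 1 with the local disk (@668)
echo 1 > "$subsys/namespaces/1/enable"                         # (@669)

echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"   # initiator-side address from get_main_ns_ip (@671)
echo tcp > "$nvmet/ports/1/addr_trtype"        # (@672)
echo 4420 > "$nvmet/ports/1/addr_trsvcid"      # (@673)
echo ipv4 > "$nvmet/ports/1/addr_adrfam"       # (@674)
ln -s "$subsys" "$nvmet/ports/1/subsystems/"   # expose the subsystem on port 1 (@677)

With the symlink in place, the nvme discover call seen later returns two discovery-log records: the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn.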
00:23:12.425 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:12.684 * Looking for test storage... 00:23:12.684 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:12.684 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:12.684 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:23:12.684 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:12.684 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:12.684 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:12.684 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:12.684 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:12.684 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:12.684 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:12.684 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:12.684 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:12.684 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:12.684 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:12.684 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:12.684 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:12.684 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:12.684 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:12.684 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:12.684 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:12.684 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:12.684 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:12.684 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:12.684 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[duplicate toolchain entries elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.685 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[duplicate toolchain entries elided]:/var/lib/snapd/snap/bin 00:23:12.685 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[duplicate toolchain entries elided]:/var/lib/snapd/snap/bin 00:23:12.685 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:23:12.685 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[duplicate toolchain entries elided]:/var/lib/snapd/snap/bin 00:23:12.685 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:23:12.685 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:12.685 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:12.685 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:12.685 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:12.685 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:12.685 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:12.685 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:12.685 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:12.685 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:23:12.685 23:49:47
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:12.685 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:12.685 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:12.685 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:12.685 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:12.685 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:12.685 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:12.685 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:12.685 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:12.685 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:12.685 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:23:12.685 23:49:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.588 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:14.588 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:23:14.588 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:14.588 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:14.588 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:14.588 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:14.588 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:14.588 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:23:14.588 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:14.588 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:23:14.588 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:23:14.588 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:23:14.588 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:23:14.588 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:23:14.588 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:23:14.588 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:14.588 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:14.588 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:14.588 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:14.588 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:14.588 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:14.588 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:14.588 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:14.588 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:14.588 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:14.588 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:14.588 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:14.588 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:14.588 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:14.588 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:14.588 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:14.588 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:14.588 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:14.588 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:14.588 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:14.588 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:14.589 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:14.589 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:14.589 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:14.589 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:14.589 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:14.589 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:14.589 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:14.589 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:14.589 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:14.589 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:14.589 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:14.589 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:14.589 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:14.589 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:14.589 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:14.589 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:14.589 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.589 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:14.589 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:14.589 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:14.589 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:14.589 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.589 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:14.589 Found net devices under 0000:09:00.0: cvl_0_0 00:23:14.589 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.589 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:14.589 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.589 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:14.589 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:14.589 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:14.589 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:14.589 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.589 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:14.589 Found net devices under 0000:09:00.1: cvl_0_1 00:23:14.589 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.589 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:14.589 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:23:14.589 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:14.589 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:14.589 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:14.589 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:14.589 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:14.589 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:14.589 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:14.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:14.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:23:14.848 00:23:14.848 --- 10.0.0.2 ping statistics --- 00:23:14.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.848 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:14.848 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:14.848 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:23:14.848 00:23:14.848 --- 10.0.0.1 ping statistics --- 00:23:14.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.848 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:14.848 23:49:49 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:14.848 23:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:16.223 Waiting for block devices as requested 00:23:16.223 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:16.223 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:16.223 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:16.483 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:16.483 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:16.483 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:16.483 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:16.742 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:16.742 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:23:17.001 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:17.001 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:17.001 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:17.001 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:17.001 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:17.260 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:17.260 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:17.260 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:17.519 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:17.519 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:17.519 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:17.519 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:23:17.519 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:17.519 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:17.519 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:17.519 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:17.519 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:17.519 No valid GPT data, bailing 00:23:17.519 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:17.519 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:23:17.519 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:23:17.519 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:17.519 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:23:17.519 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:17.519 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:17.519 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:17.519 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:17.519 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:23:17.519 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:23:17.519 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:23:17.519 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:17.519 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:23:17.519 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:23:17.519 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:23:17.519 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:17.519 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:23:17.519 00:23:17.519 Discovery Log Number of Records 2, Generation counter 2 00:23:17.519 =====Discovery Log Entry 0====== 00:23:17.519 trtype: tcp 00:23:17.519 adrfam: ipv4 00:23:17.519 subtype: current discovery subsystem 00:23:17.519 treq: not specified, sq flow control disable supported 00:23:17.519 portid: 1 00:23:17.519 trsvcid: 4420 00:23:17.519 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:17.519 traddr: 10.0.0.1 00:23:17.519 eflags: none 00:23:17.519 sectype: none 00:23:17.519 =====Discovery Log Entry 1====== 00:23:17.519 trtype: tcp 00:23:17.519 adrfam: ipv4 00:23:17.519 subtype: nvme subsystem 00:23:17.519 treq: not specified, sq flow control disable supported 00:23:17.519 portid: 1 00:23:17.519 trsvcid: 4420 00:23:17.519 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:17.519 traddr: 10.0.0.1 00:23:17.519 eflags: none 00:23:17.519 sectype: none 00:23:17.519 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:23:17.519 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:17.519 EAL: No free 2048 kB hugepages reported on node 1 00:23:17.779 ===================================================== 00:23:17.779 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:17.779 ===================================================== 00:23:17.779 Controller Capabilities/Features 00:23:17.779 ================================ 00:23:17.779 Vendor ID: 0000 00:23:17.779 Subsystem Vendor ID: 0000 00:23:17.779 Serial Number: 998de12eff7c850ea645 00:23:17.779 Model Number: Linux 00:23:17.779 Firmware Version: 6.7.0-68 00:23:17.779 Recommended Arb Burst: 0 00:23:17.779 IEEE OUI Identifier: 00 00 00 00:23:17.779 Multi-path I/O 00:23:17.779 May have multiple subsystem ports: No 00:23:17.779 May have multiple 
controllers: No 00:23:17.779 Associated with SR-IOV VF: No 00:23:17.779 Max Data Transfer Size: Unlimited 00:23:17.779 Max Number of Namespaces: 0 00:23:17.779 Max Number of I/O Queues: 1024 00:23:17.779 NVMe Specification Version (VS): 1.3 00:23:17.779 NVMe Specification Version (Identify): 1.3 00:23:17.779 Maximum Queue Entries: 1024 00:23:17.779 Contiguous Queues Required: No 00:23:17.779 Arbitration Mechanisms Supported 00:23:17.779 Weighted Round Robin: Not Supported 00:23:17.779 Vendor Specific: Not Supported 00:23:17.779 Reset Timeout: 7500 ms 00:23:17.779 Doorbell Stride: 4 bytes 00:23:17.779 NVM Subsystem Reset: Not Supported 00:23:17.779 Command Sets Supported 00:23:17.779 NVM Command Set: Supported 00:23:17.779 Boot Partition: Not Supported 00:23:17.779 Memory Page Size Minimum: 4096 bytes 00:23:17.779 Memory Page Size Maximum: 4096 bytes 00:23:17.779 Persistent Memory Region: Not Supported 00:23:17.779 Optional Asynchronous Events Supported 00:23:17.779 Namespace Attribute Notices: Not Supported 00:23:17.779 Firmware Activation Notices: Not Supported 00:23:17.779 ANA Change Notices: Not Supported 00:23:17.779 PLE Aggregate Log Change Notices: Not Supported 00:23:17.779 LBA Status Info Alert Notices: Not Supported 00:23:17.779 EGE Aggregate Log Change Notices: Not Supported 00:23:17.779 Normal NVM Subsystem Shutdown event: Not Supported 00:23:17.779 Zone Descriptor Change Notices: Not Supported 00:23:17.779 Discovery Log Change Notices: Supported 00:23:17.779 Controller Attributes 00:23:17.779 128-bit Host Identifier: Not Supported 00:23:17.779 Non-Operational Permissive Mode: Not Supported 00:23:17.779 NVM Sets: Not Supported 00:23:17.779 Read Recovery Levels: Not Supported 00:23:17.779 Endurance Groups: Not Supported 00:23:17.779 Predictable Latency Mode: Not Supported 00:23:17.779 Traffic Based Keep ALive: Not Supported 00:23:17.779 Namespace Granularity: Not Supported 00:23:17.779 SQ Associations: Not Supported 00:23:17.779 UUID List: Not Supported 00:23:17.779 Multi-Domain Subsystem: Not Supported 00:23:17.779 Fixed Capacity Management: Not Supported 00:23:17.779 Variable Capacity Management: Not Supported 00:23:17.779 Delete Endurance Group: Not Supported 00:23:17.779 Delete NVM Set: Not Supported 00:23:17.779 Extended LBA Formats Supported: Not Supported 00:23:17.779 Flexible Data Placement Supported: Not Supported 00:23:17.779 00:23:17.779 Controller Memory Buffer Support 00:23:17.779 ================================ 00:23:17.779 Supported: No 00:23:17.779 00:23:17.779 Persistent Memory Region Support 00:23:17.779 ================================ 00:23:17.779 Supported: No 00:23:17.779 00:23:17.779 Admin Command Set Attributes 00:23:17.779 ============================ 00:23:17.779 Security Send/Receive: Not Supported 00:23:17.779 Format NVM: Not Supported 00:23:17.779 Firmware Activate/Download: Not Supported 00:23:17.779 Namespace Management: Not Supported 00:23:17.779 Device Self-Test: Not Supported 00:23:17.779 Directives: Not Supported 00:23:17.779 NVMe-MI: Not Supported 00:23:17.779 Virtualization Management: Not Supported 00:23:17.779 Doorbell Buffer Config: Not Supported 00:23:17.779 Get LBA Status Capability: Not Supported 00:23:17.779 Command & Feature Lockdown Capability: Not Supported 00:23:17.779 Abort Command Limit: 1 00:23:17.779 Async Event Request Limit: 1 00:23:17.779 Number of Firmware Slots: N/A 00:23:17.779 Firmware Slot 1 Read-Only: N/A 00:23:17.779 Firmware Activation Without Reset: N/A 00:23:17.779 Multiple Update Detection Support: N/A 
00:23:17.779 Firmware Update Granularity: No Information Provided 00:23:17.779 Per-Namespace SMART Log: No 00:23:17.779 Asymmetric Namespace Access Log Page: Not Supported 00:23:17.779 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:17.779 Command Effects Log Page: Not Supported 00:23:17.779 Get Log Page Extended Data: Supported 00:23:17.779 Telemetry Log Pages: Not Supported 00:23:17.779 Persistent Event Log Pages: Not Supported 00:23:17.779 Supported Log Pages Log Page: May Support 00:23:17.779 Commands Supported & Effects Log Page: Not Supported 00:23:17.779 Feature Identifiers & Effects Log Page:May Support 00:23:17.779 NVMe-MI Commands & Effects Log Page: May Support 00:23:17.779 Data Area 4 for Telemetry Log: Not Supported 00:23:17.779 Error Log Page Entries Supported: 1 00:23:17.779 Keep Alive: Not Supported 00:23:17.779 00:23:17.779 NVM Command Set Attributes 00:23:17.779 ========================== 00:23:17.779 Submission Queue Entry Size 00:23:17.779 Max: 1 00:23:17.779 Min: 1 00:23:17.779 Completion Queue Entry Size 00:23:17.779 Max: 1 00:23:17.779 Min: 1 00:23:17.779 Number of Namespaces: 0 00:23:17.779 Compare Command: Not Supported 00:23:17.779 Write Uncorrectable Command: Not Supported 00:23:17.779 Dataset Management Command: Not Supported 00:23:17.779 Write Zeroes Command: Not Supported 00:23:17.779 Set Features Save Field: Not Supported 00:23:17.779 Reservations: Not Supported 00:23:17.779 Timestamp: Not Supported 00:23:17.779 Copy: Not Supported 00:23:17.779 Volatile Write Cache: Not Present 00:23:17.779 Atomic Write Unit (Normal): 1 00:23:17.779 Atomic Write Unit (PFail): 1 00:23:17.779 Atomic Compare & Write Unit: 1 00:23:17.779 Fused Compare & Write: Not Supported 00:23:17.779 Scatter-Gather List 00:23:17.779 SGL Command Set: Supported 00:23:17.779 SGL Keyed: Not Supported 00:23:17.779 SGL Bit Bucket Descriptor: Not Supported 00:23:17.779 SGL Metadata Pointer: Not Supported 00:23:17.779 Oversized SGL: Not Supported 00:23:17.779 SGL Metadata Address: Not Supported 00:23:17.779 SGL Offset: Supported 00:23:17.779 Transport SGL Data Block: Not Supported 00:23:17.779 Replay Protected Memory Block: Not Supported 00:23:17.779 00:23:17.779 Firmware Slot Information 00:23:17.779 ========================= 00:23:17.779 Active slot: 0 00:23:17.779 00:23:17.779 00:23:17.779 Error Log 00:23:17.779 ========= 00:23:17.779 00:23:17.779 Active Namespaces 00:23:17.779 ================= 00:23:17.779 Discovery Log Page 00:23:17.779 ================== 00:23:17.779 Generation Counter: 2 00:23:17.779 Number of Records: 2 00:23:17.779 Record Format: 0 00:23:17.779 00:23:17.779 Discovery Log Entry 0 00:23:17.779 ---------------------- 00:23:17.779 Transport Type: 3 (TCP) 00:23:17.779 Address Family: 1 (IPv4) 00:23:17.779 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:17.779 Entry Flags: 00:23:17.779 Duplicate Returned Information: 0 00:23:17.779 Explicit Persistent Connection Support for Discovery: 0 00:23:17.779 Transport Requirements: 00:23:17.779 Secure Channel: Not Specified 00:23:17.779 Port ID: 1 (0x0001) 00:23:17.779 Controller ID: 65535 (0xffff) 00:23:17.779 Admin Max SQ Size: 32 00:23:17.780 Transport Service Identifier: 4420 00:23:17.780 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:17.780 Transport Address: 10.0.0.1 00:23:17.780 Discovery Log Entry 1 00:23:17.780 ---------------------- 00:23:17.780 Transport Type: 3 (TCP) 00:23:17.780 Address Family: 1 (IPv4) 00:23:17.780 Subsystem Type: 2 (NVM Subsystem) 00:23:17.780 Entry Flags: 
00:23:17.780 Duplicate Returned Information: 0 00:23:17.780 Explicit Persistent Connection Support for Discovery: 0 00:23:17.780 Transport Requirements: 00:23:17.780 Secure Channel: Not Specified 00:23:17.780 Port ID: 1 (0x0001) 00:23:17.780 Controller ID: 65535 (0xffff) 00:23:17.780 Admin Max SQ Size: 32 00:23:17.780 Transport Service Identifier: 4420 00:23:17.780 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:17.780 Transport Address: 10.0.0.1 00:23:17.780 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:17.780 EAL: No free 2048 kB hugepages reported on node 1 00:23:17.780 get_feature(0x01) failed 00:23:17.780 get_feature(0x02) failed 00:23:17.780 get_feature(0x04) failed 00:23:17.780 ===================================================== 00:23:17.780 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:17.780 ===================================================== 00:23:17.780 Controller Capabilities/Features 00:23:17.780 ================================ 00:23:17.780 Vendor ID: 0000 00:23:17.780 Subsystem Vendor ID: 0000 00:23:17.780 Serial Number: 7cf1b6bffb71a45d2412 00:23:17.780 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:17.780 Firmware Version: 6.7.0-68 00:23:17.780 Recommended Arb Burst: 6 00:23:17.780 IEEE OUI Identifier: 00 00 00 00:23:17.780 Multi-path I/O 00:23:17.780 May have multiple subsystem ports: Yes 00:23:17.780 May have multiple controllers: Yes 00:23:17.780 Associated with SR-IOV VF: No 00:23:17.780 Max Data Transfer Size: Unlimited 00:23:17.780 Max Number of Namespaces: 1024 00:23:17.780 Max Number of I/O Queues: 128 00:23:17.780 NVMe Specification Version (VS): 1.3 00:23:17.780 NVMe Specification Version (Identify): 1.3 00:23:17.780 Maximum Queue Entries: 1024 00:23:17.780 Contiguous Queues Required: No 00:23:17.780 Arbitration Mechanisms Supported 00:23:17.780 Weighted Round Robin: Not Supported 00:23:17.780 Vendor Specific: Not Supported 00:23:17.780 Reset Timeout: 7500 ms 00:23:17.780 Doorbell Stride: 4 bytes 00:23:17.780 NVM Subsystem Reset: Not Supported 00:23:17.780 Command Sets Supported 00:23:17.780 NVM Command Set: Supported 00:23:17.780 Boot Partition: Not Supported 00:23:17.780 Memory Page Size Minimum: 4096 bytes 00:23:17.780 Memory Page Size Maximum: 4096 bytes 00:23:17.780 Persistent Memory Region: Not Supported 00:23:17.780 Optional Asynchronous Events Supported 00:23:17.780 Namespace Attribute Notices: Supported 00:23:17.780 Firmware Activation Notices: Not Supported 00:23:17.780 ANA Change Notices: Supported 00:23:17.780 PLE Aggregate Log Change Notices: Not Supported 00:23:17.780 LBA Status Info Alert Notices: Not Supported 00:23:17.780 EGE Aggregate Log Change Notices: Not Supported 00:23:17.780 Normal NVM Subsystem Shutdown event: Not Supported 00:23:17.780 Zone Descriptor Change Notices: Not Supported 00:23:17.780 Discovery Log Change Notices: Not Supported 00:23:17.780 Controller Attributes 00:23:17.780 128-bit Host Identifier: Supported 00:23:17.780 Non-Operational Permissive Mode: Not Supported 00:23:17.780 NVM Sets: Not Supported 00:23:17.780 Read Recovery Levels: Not Supported 00:23:17.780 Endurance Groups: Not Supported 00:23:17.780 Predictable Latency Mode: Not Supported 00:23:17.780 Traffic Based Keep ALive: Supported 00:23:17.780 Namespace Granularity: Not Supported 
00:23:17.780 SQ Associations: Not Supported 00:23:17.780 UUID List: Not Supported 00:23:17.780 Multi-Domain Subsystem: Not Supported 00:23:17.780 Fixed Capacity Management: Not Supported 00:23:17.780 Variable Capacity Management: Not Supported 00:23:17.780 Delete Endurance Group: Not Supported 00:23:17.780 Delete NVM Set: Not Supported 00:23:17.780 Extended LBA Formats Supported: Not Supported 00:23:17.780 Flexible Data Placement Supported: Not Supported 00:23:17.780 00:23:17.780 Controller Memory Buffer Support 00:23:17.780 ================================ 00:23:17.780 Supported: No 00:23:17.780 00:23:17.780 Persistent Memory Region Support 00:23:17.780 ================================ 00:23:17.780 Supported: No 00:23:17.780 00:23:17.780 Admin Command Set Attributes 00:23:17.780 ============================ 00:23:17.780 Security Send/Receive: Not Supported 00:23:17.780 Format NVM: Not Supported 00:23:17.780 Firmware Activate/Download: Not Supported 00:23:17.780 Namespace Management: Not Supported 00:23:17.780 Device Self-Test: Not Supported 00:23:17.780 Directives: Not Supported 00:23:17.780 NVMe-MI: Not Supported 00:23:17.780 Virtualization Management: Not Supported 00:23:17.780 Doorbell Buffer Config: Not Supported 00:23:17.780 Get LBA Status Capability: Not Supported 00:23:17.780 Command & Feature Lockdown Capability: Not Supported 00:23:17.780 Abort Command Limit: 4 00:23:17.780 Async Event Request Limit: 4 00:23:17.780 Number of Firmware Slots: N/A 00:23:17.780 Firmware Slot 1 Read-Only: N/A 00:23:17.780 Firmware Activation Without Reset: N/A 00:23:17.780 Multiple Update Detection Support: N/A 00:23:17.780 Firmware Update Granularity: No Information Provided 00:23:17.780 Per-Namespace SMART Log: Yes 00:23:17.780 Asymmetric Namespace Access Log Page: Supported 00:23:17.780 ANA Transition Time : 10 sec 00:23:17.780 00:23:17.780 Asymmetric Namespace Access Capabilities 00:23:17.780 ANA Optimized State : Supported 00:23:17.780 ANA Non-Optimized State : Supported 00:23:17.780 ANA Inaccessible State : Supported 00:23:17.780 ANA Persistent Loss State : Supported 00:23:17.780 ANA Change State : Supported 00:23:17.780 ANAGRPID is not changed : No 00:23:17.780 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:17.780 00:23:17.780 ANA Group Identifier Maximum : 128 00:23:17.780 Number of ANA Group Identifiers : 128 00:23:17.780 Max Number of Allowed Namespaces : 1024 00:23:17.780 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:23:17.780 Command Effects Log Page: Supported 00:23:17.780 Get Log Page Extended Data: Supported 00:23:17.780 Telemetry Log Pages: Not Supported 00:23:17.780 Persistent Event Log Pages: Not Supported 00:23:17.780 Supported Log Pages Log Page: May Support 00:23:17.780 Commands Supported & Effects Log Page: Not Supported 00:23:17.780 Feature Identifiers & Effects Log Page:May Support 00:23:17.780 NVMe-MI Commands & Effects Log Page: May Support 00:23:17.780 Data Area 4 for Telemetry Log: Not Supported 00:23:17.780 Error Log Page Entries Supported: 128 00:23:17.780 Keep Alive: Supported 00:23:17.780 Keep Alive Granularity: 1000 ms 00:23:17.780 00:23:17.780 NVM Command Set Attributes 00:23:17.780 ========================== 00:23:17.780 Submission Queue Entry Size 00:23:17.780 Max: 64 00:23:17.780 Min: 64 00:23:17.780 Completion Queue Entry Size 00:23:17.780 Max: 16 00:23:17.780 Min: 16 00:23:17.780 Number of Namespaces: 1024 00:23:17.780 Compare Command: Not Supported 00:23:17.780 Write Uncorrectable Command: Not Supported 00:23:17.780 Dataset Management Command: Supported 
00:23:17.780 Write Zeroes Command: Supported 00:23:17.780 Set Features Save Field: Not Supported 00:23:17.780 Reservations: Not Supported 00:23:17.780 Timestamp: Not Supported 00:23:17.780 Copy: Not Supported 00:23:17.780 Volatile Write Cache: Present 00:23:17.780 Atomic Write Unit (Normal): 1 00:23:17.780 Atomic Write Unit (PFail): 1 00:23:17.780 Atomic Compare & Write Unit: 1 00:23:17.780 Fused Compare & Write: Not Supported 00:23:17.780 Scatter-Gather List 00:23:17.780 SGL Command Set: Supported 00:23:17.781 SGL Keyed: Not Supported 00:23:17.781 SGL Bit Bucket Descriptor: Not Supported 00:23:17.781 SGL Metadata Pointer: Not Supported 00:23:17.781 Oversized SGL: Not Supported 00:23:17.781 SGL Metadata Address: Not Supported 00:23:17.781 SGL Offset: Supported 00:23:17.781 Transport SGL Data Block: Not Supported 00:23:17.781 Replay Protected Memory Block: Not Supported 00:23:17.781 00:23:17.781 Firmware Slot Information 00:23:17.781 ========================= 00:23:17.781 Active slot: 0 00:23:17.781 00:23:17.781 Asymmetric Namespace Access 00:23:17.781 =========================== 00:23:17.781 Change Count : 0 00:23:17.781 Number of ANA Group Descriptors : 1 00:23:17.781 ANA Group Descriptor : 0 00:23:17.781 ANA Group ID : 1 00:23:17.781 Number of NSID Values : 1 00:23:17.781 Change Count : 0 00:23:17.781 ANA State : 1 00:23:17.781 Namespace Identifier : 1 00:23:17.781 00:23:17.781 Commands Supported and Effects 00:23:17.781 ============================== 00:23:17.781 Admin Commands 00:23:17.781 -------------- 00:23:17.781 Get Log Page (02h): Supported 00:23:17.781 Identify (06h): Supported 00:23:17.781 Abort (08h): Supported 00:23:17.781 Set Features (09h): Supported 00:23:17.781 Get Features (0Ah): Supported 00:23:17.781 Asynchronous Event Request (0Ch): Supported 00:23:17.781 Keep Alive (18h): Supported 00:23:17.781 I/O Commands 00:23:17.781 ------------ 00:23:17.781 Flush (00h): Supported 00:23:17.781 Write (01h): Supported LBA-Change 00:23:17.781 Read (02h): Supported 00:23:17.781 Write Zeroes (08h): Supported LBA-Change 00:23:17.781 Dataset Management (09h): Supported 00:23:17.781 00:23:17.781 Error Log 00:23:17.781 ========= 00:23:17.781 Entry: 0 00:23:17.781 Error Count: 0x3 00:23:17.781 Submission Queue Id: 0x0 00:23:17.781 Command Id: 0x5 00:23:17.781 Phase Bit: 0 00:23:17.781 Status Code: 0x2 00:23:17.781 Status Code Type: 0x0 00:23:17.781 Do Not Retry: 1 00:23:17.781 Error Location: 0x28 00:23:17.781 LBA: 0x0 00:23:17.781 Namespace: 0x0 00:23:17.781 Vendor Log Page: 0x0 00:23:17.781 ----------- 00:23:17.781 Entry: 1 00:23:17.781 Error Count: 0x2 00:23:17.781 Submission Queue Id: 0x0 00:23:17.781 Command Id: 0x5 00:23:17.781 Phase Bit: 0 00:23:17.781 Status Code: 0x2 00:23:17.781 Status Code Type: 0x0 00:23:17.781 Do Not Retry: 1 00:23:17.781 Error Location: 0x28 00:23:17.781 LBA: 0x0 00:23:17.781 Namespace: 0x0 00:23:17.781 Vendor Log Page: 0x0 00:23:17.781 ----------- 00:23:17.781 Entry: 2 00:23:17.781 Error Count: 0x1 00:23:17.781 Submission Queue Id: 0x0 00:23:17.781 Command Id: 0x4 00:23:17.781 Phase Bit: 0 00:23:17.781 Status Code: 0x2 00:23:17.781 Status Code Type: 0x0 00:23:17.781 Do Not Retry: 1 00:23:17.781 Error Location: 0x28 00:23:17.781 LBA: 0x0 00:23:17.781 Namespace: 0x0 00:23:17.781 Vendor Log Page: 0x0 00:23:17.781 00:23:17.781 Number of Queues 00:23:17.781 ================ 00:23:17.781 Number of I/O Submission Queues: 128 00:23:17.781 Number of I/O Completion Queues: 128 00:23:17.781 00:23:17.781 ZNS Specific Controller Data 00:23:17.781 
============================ 00:23:17.781 Zone Append Size Limit: 0 00:23:17.781 00:23:17.781 00:23:17.781 Active Namespaces 00:23:17.781 ================= 00:23:17.781 get_feature(0x05) failed 00:23:17.781 Namespace ID:1 00:23:17.781 Command Set Identifier: NVM (00h) 00:23:17.781 Deallocate: Supported 00:23:17.781 Deallocated/Unwritten Error: Not Supported 00:23:17.781 Deallocated Read Value: Unknown 00:23:17.781 Deallocate in Write Zeroes: Not Supported 00:23:17.781 Deallocated Guard Field: 0xFFFF 00:23:17.781 Flush: Supported 00:23:17.781 Reservation: Not Supported 00:23:17.781 Namespace Sharing Capabilities: Multiple Controllers 00:23:17.781 Size (in LBAs): 1953525168 (931GiB) 00:23:17.781 Capacity (in LBAs): 1953525168 (931GiB) 00:23:17.781 Utilization (in LBAs): 1953525168 (931GiB) 00:23:17.781 UUID: 85903b14-37ec-4ef6-8d85-2f8f3476af4f 00:23:17.781 Thin Provisioning: Not Supported 00:23:17.781 Per-NS Atomic Units: Yes 00:23:17.781 Atomic Boundary Size (Normal): 0 00:23:17.781 Atomic Boundary Size (PFail): 0 00:23:17.781 Atomic Boundary Offset: 0 00:23:17.781 NGUID/EUI64 Never Reused: No 00:23:17.781 ANA group ID: 1 00:23:17.781 Namespace Write Protected: No 00:23:17.781 Number of LBA Formats: 1 00:23:17.781 Current LBA Format: LBA Format #00 00:23:17.781 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:17.781 00:23:17.781 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:17.781 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:17.781 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:23:17.781 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:17.781 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:23:17.781 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:17.781 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:17.781 rmmod nvme_tcp 00:23:17.781 rmmod nvme_fabrics 00:23:17.781 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:17.781 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:23:17.781 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:23:17.781 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:17.781 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:17.781 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:17.781 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:17.781 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:17.781 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:17.781 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.781 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:17.781 23:49:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:20.315 23:49:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:20.315 
23:49:54 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:20.315 23:49:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:20.315 23:49:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:23:20.315 23:49:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:20.315 23:49:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:20.315 23:49:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:20.315 23:49:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:20.315 23:49:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:20.315 23:49:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:20.315 23:49:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:21.248 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:21.248 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:21.248 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:21.248 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:21.248 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:21.248 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:21.248 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:21.248 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:21.248 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:21.248 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:21.248 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:21.248 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:21.248 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:21.248 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:21.248 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:21.248 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:22.184 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:23:22.184 00:23:22.184 real 0m9.774s 00:23:22.184 user 0m2.055s 00:23:22.184 sys 0m3.689s 00:23:22.184 23:49:57 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:22.184 23:49:57 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.184 ************************************ 00:23:22.184 END TEST nvmf_identify_kernel_target 00:23:22.184 ************************************ 00:23:22.445 23:49:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:22.445 23:49:57 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:22.445 23:49:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:22.445 23:49:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:22.445 23:49:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:22.445 ************************************ 00:23:22.445 START TEST nvmf_auth_host 00:23:22.445 ************************************ 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:22.445 * Looking for test storage... 00:23:22.445 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:23:22.445 23:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:24.977 
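The array plumbing above is nvmftestinit classifying NICs: PCI vendor:device IDs are bucketed into e810, x722, and mlx families, and because this job runs with SPDK_TEST_NVMF_NICS=e810 only the e810 bucket is kept. A condensed sketch (IDs copied from the trace; pci_bus_cache is assumed to be the suite's map from "vendor:device" to a list of PCI addresses):

    intel=0x8086 mellanox=0x15b3
    e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})
    x722=(${pci_bus_cache["$intel:0x37d2"]})
    mlx=(${pci_bus_cache["$mellanox:0x1017"]} ${pci_bus_cache["$mellanox:0x1019"]})  # plus the other ConnectX IDs traced above
    pci_devs=("${e810[@]}")        # e810 run: keep only the E810 ports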
23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:24.977 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:24.977 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:24.977 Found net devices under 0000:09:00.0: 
cvl_0_0 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:24.977 Found net devices under 0000:09:00.1: cvl_0_1 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:24.977 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:24.977 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:23:24.977 00:23:24.977 --- 10.0.0.2 ping statistics --- 00:23:24.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.977 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:24.977 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:24.977 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:23:24.977 00:23:24.977 --- 10.0.0.1 ping statistics --- 00:23:24.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.977 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:23:24.977 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:24.978 23:49:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:24.978 23:49:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.978 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=3868939 00:23:24.978 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:23:24.978 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 3868939 00:23:24.978 23:49:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3868939 ']' 00:23:24.978 23:49:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:24.978 23:49:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:24.978 23:49:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
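The ping pair above is the smoke test for the topology nvmf_tcp_init just built: the target port cvl_0_0 lives in the cvl_0_0_ns_spdk namespace as 10.0.0.2, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, and nvmf_tgt is then started inside the namespace (hence the ip netns exec prefix folded into NVMF_APP). A minimal reproduction, with names and addresses taken from the trace (run as root):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                       # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # and back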
00:23:24.978 23:49:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:24.978 23:49:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.978 23:49:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:24.978 23:49:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:23:24.978 23:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:24.978 23:49:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:24.978 23:49:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.978 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:24.978 23:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:23:24.978 23:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:23:24.978 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:24.978 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:24.978 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:24.978 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:24.978 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:24.978 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:24.978 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2eccaf9e60e06ddb515c8669ccabd801 00:23:24.978 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:24.978 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.h6V 00:23:24.978 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2eccaf9e60e06ddb515c8669ccabd801 0 00:23:24.978 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2eccaf9e60e06ddb515c8669ccabd801 0 00:23:24.978 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:24.978 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:24.978 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2eccaf9e60e06ddb515c8669ccabd801 00:23:24.978 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:24.978 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:24.978 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.h6V 00:23:24.978 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.h6V 00:23:24.978 23:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.h6V 00:23:24.978 23:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:23:24.978 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:24.978 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:24.978 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:24.978 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:24.978 
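gen_dhchap_key null 32, traced above, draws 16 random bytes, hex-encodes them into a 32-character secret, and stores the framed result in a mode-0600 temp file. The xxd and mktemp steps are verbatim from the trace; the python one-liner is not expanded by xtrace, so the framing described in the comment below is the standard DH-HMAC-CHAP secret representation rather than a transcription of the script:

    key=$(xxd -p -c0 -l 16 /dev/urandom)   # 16 random bytes -> 32 hex chars
    file=$(mktemp -t spdk.key-null.XXX)    # e.g. /tmp/spdk.key-null.h6V
    # python step (untraced): write "DHHC-1:00:<base64 of the secret>:" to
    # $file, where 00 is the digest id of an unhashed ("null") secret
    chmod 0600 "$file"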
23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:24.978 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:24.978 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2b09806abd2e18aae9a9b861a0b998dd06e8f8ba3aed6e593b51c71d4b748990 00:23:24.978 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:24.978 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.b5T 00:23:24.978 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2b09806abd2e18aae9a9b861a0b998dd06e8f8ba3aed6e593b51c71d4b748990 3 00:23:24.978 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2b09806abd2e18aae9a9b861a0b998dd06e8f8ba3aed6e593b51c71d4b748990 3 00:23:24.978 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:24.978 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:24.978 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2b09806abd2e18aae9a9b861a0b998dd06e8f8ba3aed6e593b51c71d4b748990 00:23:24.978 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:24.978 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:25.236 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.b5T 00:23:25.236 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.b5T 00:23:25.236 23:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.b5T 00:23:25.236 23:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:23:25.236 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:25.236 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:25.236 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:25.236 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:25.236 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:25.236 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:25.236 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1124821423ca180ae89b6bfa8b040f138cebc98fe6b78d94 00:23:25.236 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:25.236 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.QO6 00:23:25.236 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1124821423ca180ae89b6bfa8b040f138cebc98fe6b78d94 0 00:23:25.236 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1124821423ca180ae89b6bfa8b040f138cebc98fe6b78d94 0 00:23:25.236 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:25.236 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:25.236 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1124821423ca180ae89b6bfa8b040f138cebc98fe6b78d94 00:23:25.236 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:25.236 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.QO6 00:23:25.237 23:50:00 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.QO6 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.QO6 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=afba77ab5a1af9bac2b2439c507fbab8988b2879f62c62a0 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.wTE 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key afba77ab5a1af9bac2b2439c507fbab8988b2879f62c62a0 2 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 afba77ab5a1af9bac2b2439c507fbab8988b2879f62c62a0 2 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=afba77ab5a1af9bac2b2439c507fbab8988b2879f62c62a0 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.wTE 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.wTE 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.wTE 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4de21e93e67db134b0c15fba502fd94b 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.vQC 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4de21e93e67db134b0c15fba502fd94b 1 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4de21e93e67db134b0c15fba502fd94b 1 
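The repeated gen_dhchap_key calls only vary two inputs, the digest name and the secret length; the digest id ends up in the DHHC-1 prefix. Collected in one place from the traces above (values exactly as logged):

    declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    # secret length in hex chars is twice the byte count drawn from urandom:
    #   len=32 -> xxd -l 16,  len=48 -> xxd -l 24,  len=64 -> xxd -l 32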
00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4de21e93e67db134b0c15fba502fd94b 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.vQC 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.vQC 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.vQC 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c5120933e06a60948d3093ba055c33db 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.XWO 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c5120933e06a60948d3093ba055c33db 1 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c5120933e06a60948d3093ba055c33db 1 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c5120933e06a60948d3093ba055c33db 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.XWO 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.XWO 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.XWO 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=383d97088247c81f05c0d712275cac126d8d28949294f60e 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.lDY 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 383d97088247c81f05c0d712275cac126d8d28949294f60e 2 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 383d97088247c81f05c0d712275cac126d8d28949294f60e 2 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=383d97088247c81f05c0d712275cac126d8d28949294f60e 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:25.237 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:25.495 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.lDY 00:23:25.495 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.lDY 00:23:25.495 23:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.lDY 00:23:25.495 23:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:23:25.495 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:25.495 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:25.495 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:25.495 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:25.495 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:25.495 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:25.495 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=373adff7eb0a0c5f72b775e0f60fbb28 00:23:25.495 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:25.496 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.8jI 00:23:25.496 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 373adff7eb0a0c5f72b775e0f60fbb28 0 00:23:25.496 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 373adff7eb0a0c5f72b775e0f60fbb28 0 00:23:25.496 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:25.496 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:25.496 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=373adff7eb0a0c5f72b775e0f60fbb28 00:23:25.496 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:25.496 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:25.496 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.8jI 00:23:25.496 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.8jI 00:23:25.496 23:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.8jI 00:23:25.496 23:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:23:25.496 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:23:25.496 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:25.496 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:25.496 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:25.496 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:25.496 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:25.496 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7e5c1704811615f3b1eef9660db5204f4f3d2dd00f95edd8c965a4ed1eea30e1 00:23:25.496 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:25.496 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.oBc 00:23:25.496 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7e5c1704811615f3b1eef9660db5204f4f3d2dd00f95edd8c965a4ed1eea30e1 3 00:23:25.496 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7e5c1704811615f3b1eef9660db5204f4f3d2dd00f95edd8c965a4ed1eea30e1 3 00:23:25.496 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:25.496 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:25.496 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7e5c1704811615f3b1eef9660db5204f4f3d2dd00f95edd8c965a4ed1eea30e1 00:23:25.496 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:25.496 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:25.496 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.oBc 00:23:25.496 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.oBc 00:23:25.496 23:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.oBc 00:23:25.496 23:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:23:25.496 23:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3868939 00:23:25.496 23:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3868939 ']' 00:23:25.496 23:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.496 23:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:25.496 23:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
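With keys[0..4] and ckeys[0..3] populated (ckeys[4] is deliberately left empty, so key4 gets no controller key), the trace that follows registers each secret with the target's keyring over RPC. A condensed sketch of that loop (rpc_cmd is the suite's wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock):

    for i in "${!keys[@]}"; do
        rpc_cmd keyring_file_add_key "key$i" "${keys[$i]}"
        [[ -n ${ckeys[$i]} ]] && rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    done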
00:23:25.496 23:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:25.496 23:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.h6V 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.b5T ]] 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.b5T 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.QO6 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.wTE ]] 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.wTE 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.vQC 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.XWO ]] 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.XWO 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
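Once the remaining keys are registered below, nvmet_auth_init asks get_main_ns_ip for the address the host will dial. As traced shortly, the helper resolves a per-transport variable name and then dereferences it with bash indirect expansion; a sketch with this run's values:

    declare -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
    ipvar=${ip_candidates[tcp]}    # -> "NVMF_INITIATOR_IP"
    echo "${!ipvar}"               # indirect expansion -> 10.0.0.1 in this run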
00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.lDY 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.8jI ]] 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.8jI 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.oBc 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:25.755 23:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:23:25.756 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:23:25.756 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:25.756 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:25.756 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:25.756 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:25.756 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
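configure_kernel_target, which runs next, builds the kernel NVMe-oF target purely through configfs: create the subsystem and namespace, back the namespace with a free local disk (the "No valid GPT data, bailing" line below is the check that /dev/nvme0n1 is unused), then open a TCP port on 10.0.0.1:4420 and link the subsystem into it. xtrace hides redirection targets, so the attribute names in this sketch are the standard nvmet configfs ones, not transcribed from the log:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$nvmet/ports/1"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
    echo tcp          > "$nvmet/ports/1/addr_trtype"
    echo 4420         > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4         > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"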
00:23:25.756 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:23:25.756 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:25.756 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:25.756 23:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:26.689 Waiting for block devices as requested 00:23:26.689 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:26.689 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:26.947 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:26.947 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:26.947 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:27.206 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:27.206 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:27.206 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:27.206 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:23:27.464 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:27.464 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:27.722 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:27.722 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:27.722 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:27.722 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:27.980 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:27.980 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:28.238 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:28.238 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:28.238 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:28.238 23:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:23:28.238 23:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:28.238 23:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:28.238 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:28.238 23:50:03 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:28.238 23:50:03 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:28.498 No valid GPT data, bailing 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:23:28.498 00:23:28.498 Discovery Log Number of Records 2, Generation counter 2 00:23:28.498 =====Discovery Log Entry 0====== 00:23:28.498 trtype: tcp 00:23:28.498 adrfam: ipv4 00:23:28.498 subtype: current discovery subsystem 00:23:28.498 treq: not specified, sq flow control disable supported 00:23:28.498 portid: 1 00:23:28.498 trsvcid: 4420 00:23:28.498 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:28.498 traddr: 10.0.0.1 00:23:28.498 eflags: none 00:23:28.498 sectype: none 00:23:28.498 =====Discovery Log Entry 1====== 00:23:28.498 trtype: tcp 00:23:28.498 adrfam: ipv4 00:23:28.498 subtype: nvme subsystem 00:23:28.498 treq: not specified, sq flow control disable supported 00:23:28.498 portid: 1 00:23:28.498 trsvcid: 4420 00:23:28.498 subnqn: nqn.2024-02.io.spdk:cnode0 00:23:28.498 traddr: 10.0.0.1 00:23:28.498 eflags: none 00:23:28.498 sectype: none 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTEyNDgyMTQyM2NhMTgwYWU4OWI2YmZhOGIwNDBmMTM4Y2ViYzk4ZmU2Yjc4ZDk0anoPCA==: 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTEyNDgyMTQyM2NhMTgwYWU4OWI2YmZhOGIwNDBmMTM4Y2ViYzk4ZmU2Yjc4ZDk0anoPCA==: 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: 
]] 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.498 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.499 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.499 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.499 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.499 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:28.499 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.499 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:28.499 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:28.499 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:28.499 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:28.499 23:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.499 23:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.791 nvme0n1 00:23:28.791 23:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.791 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.791 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.791 23:50:03 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.791 23:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.791 23:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.791 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.791 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.791 23:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.791 23:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.791 23:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.791 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:28.791 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:28.791 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.791 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:28.791 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.791 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:28.791 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:28.791 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:28.791 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVjY2FmOWU2MGUwNmRkYjUxNWM4NjY5Y2NhYmQ4MDG/Yf1q: 00:23:28.791 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: 00:23:28.791 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:28.791 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:28.791 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVjY2FmOWU2MGUwNmRkYjUxNWM4NjY5Y2NhYmQ4MDG/Yf1q: 00:23:28.791 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: ]] 00:23:28.791 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: 00:23:28.791 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:23:28.791 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.791 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:28.791 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:28.791 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:28.791 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.791 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:28.791 23:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.791 23:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.791 23:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.791 
23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.791 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.791 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.791 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.791 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.791 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.791 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:28.791 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.791 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:28.792 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:28.792 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:28.792 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:28.792 23:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.792 23:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.792 nvme0n1 00:23:28.792 23:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.792 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.792 23:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.792 23:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.792 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.792 23:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.792 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.792 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.792 23:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.792 23:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.050 23:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.050 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.050 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:29.050 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.050 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:29.050 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:29.050 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:29.050 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTEyNDgyMTQyM2NhMTgwYWU4OWI2YmZhOGIwNDBmMTM4Y2ViYzk4ZmU2Yjc4ZDk0anoPCA==: 00:23:29.050 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: 00:23:29.050 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:29.050 23:50:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:29.050 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTEyNDgyMTQyM2NhMTgwYWU4OWI2YmZhOGIwNDBmMTM4Y2ViYzk4ZmU2Yjc4ZDk0anoPCA==: 00:23:29.050 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: ]] 00:23:29.050 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: 00:23:29.050 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:23:29.050 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.050 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:29.050 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:29.050 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:29.050 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.050 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:29.050 23:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.050 23:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.050 23:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.050 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.050 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.050 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.050 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.050 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.050 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.050 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.050 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.050 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.050 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.050 23:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.050 23:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:29.050 23:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.050 23:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.050 nvme0n1 00:23:29.050 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.050 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.050 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.050 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.051 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
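On the initiator side, connect_authenticate drives SPDK's bdev_nvme layer over JSON-RPC. After the initial attach at host/auth.sh@93-94 that offered every digest and DH group at once, each loop iteration narrows bdev_nvme_set_options to a single digest/dhgroup pair before attaching, which is presumably what proves that each combination negotiates on its own rather than falling back to a stronger mutual choice. The equivalent rpc.py invocation for the iteration above would look roughly like this; the rpc.py path is an assumption, and key1/ckey1 are key names the test script registered earlier, outside this excerpt:

    # Sketch of the RPCs the rpc_cmd wrapper issues for one iteration.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Only offer HMAC-SHA-256 and ffdhe2048 during DH-HMAC-CHAP negotiation.
    "$rpc" bdev_nvme_set_options \
        --dhchap-digests sha256 \
        --dhchap-dhgroups ffdhe2048

    # Connect to the kernel target, authenticating with key1 and expecting
    # the controller to authenticate back with ckey1 (bidirectional auth).
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1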
00:23:29.051 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.051 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.051 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.051 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.051 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.051 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.051 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.051 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:29.051 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.051 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:29.051 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:29.051 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:29.051 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRlMjFlOTNlNjdkYjEzNGIwYzE1ZmJhNTAyZmQ5NGIRLXjZ: 00:23:29.051 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: 00:23:29.051 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:29.051 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:29.051 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRlMjFlOTNlNjdkYjEzNGIwYzE1ZmJhNTAyZmQ5NGIRLXjZ: 00:23:29.051 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: ]] 00:23:29.051 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: 00:23:29.051 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:23:29.051 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.051 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:29.051 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:29.051 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:29.051 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.051 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:29.051 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.051 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.051 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.051 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.051 23:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.051 23:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.051 23:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.051 23:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.051 23:50:04 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.051 23:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.051 23:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.051 23:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.051 23:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.051 23:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.051 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:29.051 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.051 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.309 nvme0n1 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzgzZDk3MDg4MjQ3YzgxZjA1YzBkNzEyMjc1Y2FjMTI2ZDhkMjg5NDkyOTRmNjBlfGQojw==: 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzgzZDk3MDg4MjQ3YzgxZjA1YzBkNzEyMjc1Y2FjMTI2ZDhkMjg5NDkyOTRmNjBlfGQojw==: 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: ]] 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: 00:23:29.309 23:50:04 
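Every secret in this run is in the DHHC-1 transport format used by the Linux NVMe in-band-authentication stack: a 'DHHC-1' tag, a two-digit transformation field (00 = plain secret; 01/02/03 = secret pre-transformed with SHA-256/384/512), and a base64 payload, colon-terminated. A quick way to pick one apart in shell, assuming (my reading of the in-band auth format, not something this log states) that the last four decoded bytes are a CRC-32 appended to the secret:

    # Inspect a DHHC-1 secret from the trace (keyid 2's host key).
    key='DHHC-1:01:NGRlMjFlOTNlNjdkYjEzNGIwYzE1ZmJhNTAyZmQ5NGIRLXjZ:'
    IFS=: read -r tag xform b64 _ <<< "$key"
    echo "tag=$tag xform=$xform"            # DHHC-1, 01 -> SHA-256-transformed
    decoded=$(printf '%s' "$b64" | base64 -d | wc -c)
    echo "secret bytes: $((decoded - 4))"   # assumes a trailing 4-byte CRC-32

On this key that prints 32 secret bytes; the 01/02/03 keys elsewhere in the run decode to 32-, 48- and 64-byte secrets respectively.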
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.309 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.567 nvme0n1 00:23:29.567 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.567 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.567 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.567 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.567 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.567 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.567 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.567 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.567 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.567 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.567 23:50:04 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.567 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.567 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:23:29.567 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.568 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:29.568 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:29.568 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:29.568 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2U1YzE3MDQ4MTE2MTVmM2IxZWVmOTY2MGRiNTIwNGY0ZjNkMmRkMDBmOTVlZGQ4Yzk2NWE0ZWQxZWVhMzBlMcp+cwM=: 00:23:29.568 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:29.568 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:29.568 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:29.568 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2U1YzE3MDQ4MTE2MTVmM2IxZWVmOTY2MGRiNTIwNGY0ZjNkMmRkMDBmOTVlZGQ4Yzk2NWE0ZWQxZWVhMzBlMcp+cwM=: 00:23:29.568 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:29.568 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:23:29.568 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.568 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:29.568 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:29.568 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:29.568 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.568 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:29.568 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.568 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.568 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.568 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.568 23:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.568 23:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.568 23:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.568 23:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.568 23:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.568 23:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.568 23:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.568 23:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.568 23:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.568 23:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.568 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:29.568 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.568 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.826 nvme0n1 00:23:29.826 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.826 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.826 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.826 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.826 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.826 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.826 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.826 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.826 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.826 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.826 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.826 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:29.826 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.827 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:23:29.827 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.827 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:29.827 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:29.827 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:29.827 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVjY2FmOWU2MGUwNmRkYjUxNWM4NjY5Y2NhYmQ4MDG/Yf1q: 00:23:29.827 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: 00:23:29.827 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:29.827 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:29.827 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVjY2FmOWU2MGUwNmRkYjUxNWM4NjY5Y2NhYmQ4MDG/Yf1q: 00:23:29.827 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: ]] 00:23:29.827 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: 00:23:29.827 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:23:29.827 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.827 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:29.827 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:29.827 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:29.827 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:23:29.827 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:29.827 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.827 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.827 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.827 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.827 23:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.827 23:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.827 23:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.827 23:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.827 23:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.827 23:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.827 23:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.827 23:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.827 23:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.827 23:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.827 23:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:29.827 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.827 23:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.085 nvme0n1 00:23:30.085 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.085 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.085 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.085 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.085 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.085 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.085 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.086 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.086 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.086 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.086 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.086 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.086 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:23:30.086 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.086 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:30.086 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:30.086 23:50:05 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:23:30.086 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTEyNDgyMTQyM2NhMTgwYWU4OWI2YmZhOGIwNDBmMTM4Y2ViYzk4ZmU2Yjc4ZDk0anoPCA==: 00:23:30.086 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: 00:23:30.086 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:30.086 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:30.086 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTEyNDgyMTQyM2NhMTgwYWU4OWI2YmZhOGIwNDBmMTM4Y2ViYzk4ZmU2Yjc4ZDk0anoPCA==: 00:23:30.086 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: ]] 00:23:30.086 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: 00:23:30.086 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:23:30.086 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.086 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:30.086 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:30.086 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:30.086 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.086 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:30.086 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.086 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.086 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.086 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.086 23:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.086 23:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.086 23:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.086 23:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.086 23:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.086 23:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.086 23:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.086 23:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.086 23:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.086 23:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.086 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:30.086 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.086 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.345 nvme0n1 00:23:30.345 
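The get_main_ns_ip helper that keeps reappearing in the trace just maps the test transport to the environment variable holding the relevant IP: NVMF_FIRST_TARGET_IP for RDMA, NVMF_INITIATOR_IP for TCP, hence the 10.0.0.1 it echoes on every call here. An approximate reconstruction from the xtrace, with TEST_TRANSPORT as a hypothetical name for the transport variable the trace has already expanded to 'tcp':

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        # The trace shows -z tests on the transport and the candidate name.
        [[ -z $TEST_TRANSPORT ]] && return 1
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

        ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1            # indirect expansion: its value
        echo "${!ip}"                          # 10.0.0.1 in this job
    }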
23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRlMjFlOTNlNjdkYjEzNGIwYzE1ZmJhNTAyZmQ5NGIRLXjZ: 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRlMjFlOTNlNjdkYjEzNGIwYzE1ZmJhNTAyZmQ5NGIRLXjZ: 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: ]] 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.345 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.603 nvme0n1 00:23:30.603 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.603 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.603 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.603 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.603 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.603 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.604 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.604 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.604 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.604 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.604 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.604 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.604 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:23:30.604 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.604 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:30.604 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:30.604 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:30.604 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzgzZDk3MDg4MjQ3YzgxZjA1YzBkNzEyMjc1Y2FjMTI2ZDhkMjg5NDkyOTRmNjBlfGQojw==: 00:23:30.604 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: 00:23:30.604 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:30.604 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:23:30.604 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzgzZDk3MDg4MjQ3YzgxZjA1YzBkNzEyMjc1Y2FjMTI2ZDhkMjg5NDkyOTRmNjBlfGQojw==: 00:23:30.604 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: ]] 00:23:30.604 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: 00:23:30.604 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:23:30.604 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.604 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:30.604 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:30.604 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:30.604 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.604 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:30.604 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.604 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.604 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.604 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.604 23:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.604 23:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.604 23:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.604 23:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.604 23:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.604 23:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.604 23:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.604 23:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.604 23:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.604 23:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.604 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:30.604 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.604 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.861 nvme0n1 00:23:30.861 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.861 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.861 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.861 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.861 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.861 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.861 
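Between every attach the script proves the authenticated connection actually came up before tearing it down, as the lines just below do again: bdev_nvme_get_controllers is piped through jq to pull the controller name, the name is compared against nvme0 (the backslash-escaped '\n\v\m\e\0' in the trace is just xtrace's rendering of a quoted, literal-match right-hand side), and the controller is detached so the next digest/dhgroup/keyid combination starts clean. Roughly, reusing the rpc path assumed earlier:

    # Post-attach verification, the way the trace does it.
    name=$("$rpc" bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]] || { echo "authenticated attach failed" >&2; exit 1; }
    "$rpc" bdev_nvme_detach_controller nvme0   # clean slate for the next combo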
23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.861 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.861 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.861 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.861 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.861 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.861 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:23:30.861 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.861 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:30.861 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:30.862 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:30.862 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2U1YzE3MDQ4MTE2MTVmM2IxZWVmOTY2MGRiNTIwNGY0ZjNkMmRkMDBmOTVlZGQ4Yzk2NWE0ZWQxZWVhMzBlMcp+cwM=: 00:23:30.862 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:30.862 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:30.862 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:30.862 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2U1YzE3MDQ4MTE2MTVmM2IxZWVmOTY2MGRiNTIwNGY0ZjNkMmRkMDBmOTVlZGQ4Yzk2NWE0ZWQxZWVhMzBlMcp+cwM=: 00:23:30.862 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:30.862 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:23:30.862 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.862 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:30.862 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:30.862 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:30.862 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.862 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:30.862 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.862 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.862 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.862 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.862 23:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.862 23:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.862 23:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.862 23:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.862 23:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.862 23:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.862 23:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.862 23:50:05 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.862 23:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.862 23:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.862 23:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:30.862 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.862 23:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.120 nvme0n1 00:23:31.120 23:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.120 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.120 23:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.120 23:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.120 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.120 23:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.120 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.120 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.120 23:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.120 23:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.120 23:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.120 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:31.120 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.120 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:23:31.120 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.120 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:31.120 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:31.120 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:31.120 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVjY2FmOWU2MGUwNmRkYjUxNWM4NjY5Y2NhYmQ4MDG/Yf1q: 00:23:31.120 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: 00:23:31.120 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:31.120 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:31.120 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVjY2FmOWU2MGUwNmRkYjUxNWM4NjY5Y2NhYmQ4MDG/Yf1q: 00:23:31.120 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: ]] 00:23:31.120 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: 00:23:31.120 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:23:31.120 23:50:06 
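The cadence of this whole excerpt (keyids 0 through 4 under ffdhe2048, then ffdhe3072, and from here ffdhe4096) comes from a nested driver loop visible at host/auth.sh@100-104. Reconstructed approximately, with the digest and dhgroup lists taken from the printf calls at @93-94 and the keys/ckeys arrays standing in for the DHHC-1 strings set up earlier in the script:

    # Approximate shape of the driver loop (host/auth.sh@100-104).
    digests=(sha256 sha384 sha512)
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    # keys[0..4]/ckeys[0..4] hold the DHHC-1 secrets; ckeys[4] is empty,
    # which is how the no-controller-key case gets exercised.

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done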
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.120 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:31.120 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:31.120 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:31.120 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.120 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:31.120 23:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.120 23:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.120 23:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.120 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.120 23:50:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.120 23:50:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.121 23:50:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.121 23:50:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.121 23:50:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.121 23:50:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.121 23:50:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.121 23:50:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.121 23:50:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.121 23:50:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.121 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:31.121 23:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.121 23:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.379 nvme0n1 00:23:31.379 23:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.379 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.379 23:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.379 23:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.379 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.379 23:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.379 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.379 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.379 23:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.379 23:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.379 23:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.379 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:31.379 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:23:31.379 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.379 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:31.379 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:31.379 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:31.379 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTEyNDgyMTQyM2NhMTgwYWU4OWI2YmZhOGIwNDBmMTM4Y2ViYzk4ZmU2Yjc4ZDk0anoPCA==: 00:23:31.379 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: 00:23:31.379 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:31.379 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:31.379 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTEyNDgyMTQyM2NhMTgwYWU4OWI2YmZhOGIwNDBmMTM4Y2ViYzk4ZmU2Yjc4ZDk0anoPCA==: 00:23:31.380 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: ]] 00:23:31.380 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: 00:23:31.380 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:23:31.380 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.380 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:31.380 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:31.380 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:31.380 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.380 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:31.380 23:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.380 23:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.380 23:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.380 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.380 23:50:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.380 23:50:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.380 23:50:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.380 23:50:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.380 23:50:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.380 23:50:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.380 23:50:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.380 23:50:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.380 23:50:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.380 23:50:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.380 23:50:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:31.380 23:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.380 23:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.638 nvme0n1 00:23:31.638 23:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.638 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.638 23:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.638 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.638 23:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.638 23:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.638 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.638 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.638 23:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.638 23:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.896 23:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.896 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.896 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:23:31.896 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.896 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:31.896 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:31.896 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:31.896 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRlMjFlOTNlNjdkYjEzNGIwYzE1ZmJhNTAyZmQ5NGIRLXjZ: 00:23:31.896 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: 00:23:31.896 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:31.896 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:31.896 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRlMjFlOTNlNjdkYjEzNGIwYzE1ZmJhNTAyZmQ5NGIRLXjZ: 00:23:31.896 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: ]] 00:23:31.896 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: 00:23:31.896 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:23:31.896 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.896 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:31.896 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:31.896 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:31.896 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.896 23:50:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:31.896 23:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.896 23:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.896 23:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.896 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.896 23:50:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.896 23:50:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.896 23:50:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.896 23:50:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.896 23:50:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.896 23:50:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.896 23:50:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.896 23:50:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.896 23:50:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.896 23:50:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.896 23:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:31.896 23:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.896 23:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.155 nvme0n1 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
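
The host/auth.sh@100-@104 markers in this trace are a nested sweep over every digest, DH group, and key index. A minimal reconstruction of that loop, using only the names visible in the trace (the digests/dhgroups/keys arrays are populated earlier in the script, outside this excerpt):

    for digest in "${digests[@]}"; do                            # host/auth.sh@100
        for dhgroup in "${dhgroups[@]}"; do                      # host/auth.sh@101
            for keyid in "${!keys[@]}"; do                       # host/auth.sh@102
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # @103: program the target side
                connect_authenticate "$digest" "$dhgroup" "$keyid"   # @104: attach, verify, detach
            done
        done
    done

Each pass below is one (digest, dhgroup, keyid) combination; the bare nvme0n1 lines are the bdev name that bdev_nvme_attach_controller prints once the authenticated connect succeeds.
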
00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzgzZDk3MDg4MjQ3YzgxZjA1YzBkNzEyMjc1Y2FjMTI2ZDhkMjg5NDkyOTRmNjBlfGQojw==: 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzgzZDk3MDg4MjQ3YzgxZjA1YzBkNzEyMjc1Y2FjMTI2ZDhkMjg5NDkyOTRmNjBlfGQojw==: 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: ]] 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.155 23:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.415 nvme0n1 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.415 23:50:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2U1YzE3MDQ4MTE2MTVmM2IxZWVmOTY2MGRiNTIwNGY0ZjNkMmRkMDBmOTVlZGQ4Yzk2NWE0ZWQxZWVhMzBlMcp+cwM=: 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2U1YzE3MDQ4MTE2MTVmM2IxZWVmOTY2MGRiNTIwNGY0ZjNkMmRkMDBmOTVlZGQ4Yzk2NWE0ZWQxZWVhMzBlMcp+cwM=: 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.415 23:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.674 nvme0n1 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVjY2FmOWU2MGUwNmRkYjUxNWM4NjY5Y2NhYmQ4MDG/Yf1q: 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVjY2FmOWU2MGUwNmRkYjUxNWM4NjY5Y2NhYmQ4MDG/Yf1q: 00:23:32.674 23:50:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: ]] 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.674 23:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.241 nvme0n1 00:23:33.241 23:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.241 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.241 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.241 23:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.241 23:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.241 23:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.241 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.241 
23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.242 23:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.242 23:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.242 23:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.242 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.242 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:23:33.242 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.242 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:33.242 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:33.242 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:33.242 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTEyNDgyMTQyM2NhMTgwYWU4OWI2YmZhOGIwNDBmMTM4Y2ViYzk4ZmU2Yjc4ZDk0anoPCA==: 00:23:33.242 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: 00:23:33.242 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:33.242 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:33.242 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTEyNDgyMTQyM2NhMTgwYWU4OWI2YmZhOGIwNDBmMTM4Y2ViYzk4ZmU2Yjc4ZDk0anoPCA==: 00:23:33.242 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: ]] 00:23:33.242 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: 00:23:33.242 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:23:33.242 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.242 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:33.242 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:33.242 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:33.242 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.242 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:33.242 23:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.242 23:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.242 23:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.242 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.242 23:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:33.242 23:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:33.242 23:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:33.242 23:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.242 23:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.242 23:50:08 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:33.242 23:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.242 23:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:33.242 23:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:33.242 23:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:33.242 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:33.242 23:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.242 23:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.806 nvme0n1 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRlMjFlOTNlNjdkYjEzNGIwYzE1ZmJhNTAyZmQ5NGIRLXjZ: 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRlMjFlOTNlNjdkYjEzNGIwYzE1ZmJhNTAyZmQ5NGIRLXjZ: 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: ]] 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.806 23:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.371 nvme0n1 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.371 
23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzgzZDk3MDg4MjQ3YzgxZjA1YzBkNzEyMjc1Y2FjMTI2ZDhkMjg5NDkyOTRmNjBlfGQojw==: 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzgzZDk3MDg4MjQ3YzgxZjA1YzBkNzEyMjc1Y2FjMTI2ZDhkMjg5NDkyOTRmNjBlfGQojw==: 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: ]] 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.371 23:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.938 nvme0n1 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2U1YzE3MDQ4MTE2MTVmM2IxZWVmOTY2MGRiNTIwNGY0ZjNkMmRkMDBmOTVlZGQ4Yzk2NWE0ZWQxZWVhMzBlMcp+cwM=: 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2U1YzE3MDQ4MTE2MTVmM2IxZWVmOTY2MGRiNTIwNGY0ZjNkMmRkMDBmOTVlZGQ4Yzk2NWE0ZWQxZWVhMzBlMcp+cwM=: 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.938 23:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.504 nvme0n1 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVjY2FmOWU2MGUwNmRkYjUxNWM4NjY5Y2NhYmQ4MDG/Yf1q: 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVjY2FmOWU2MGUwNmRkYjUxNWM4NjY5Y2NhYmQ4MDG/Yf1q: 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: ]] 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:35.504 23:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:35.505 23:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.505 23:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.436 nvme0n1 00:23:36.436 23:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.436 23:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.436 23:50:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.436 23:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.436 23:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.436 23:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.436 23:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.436 23:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.436 23:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.436 23:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.436 23:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.436 23:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:36.436 23:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:23:36.436 23:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.436 23:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:36.436 23:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:36.436 23:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:36.436 23:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTEyNDgyMTQyM2NhMTgwYWU4OWI2YmZhOGIwNDBmMTM4Y2ViYzk4ZmU2Yjc4ZDk0anoPCA==: 00:23:36.436 23:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: 00:23:36.436 23:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:36.436 23:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:36.436 23:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTEyNDgyMTQyM2NhMTgwYWU4OWI2YmZhOGIwNDBmMTM4Y2ViYzk4ZmU2Yjc4ZDk0anoPCA==: 00:23:36.436 23:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: ]] 00:23:36.436 23:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: 00:23:36.436 23:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:23:36.436 23:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.436 23:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:36.437 23:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:36.437 23:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:36.437 23:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.437 23:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:36.437 23:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.437 23:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.437 23:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.437 23:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.437 23:50:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:23:36.437 23:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:36.437 23:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:36.437 23:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.437 23:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.437 23:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:36.437 23:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.437 23:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:36.437 23:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:36.437 23:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:36.437 23:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:36.437 23:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.437 23:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.370 nvme0n1 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRlMjFlOTNlNjdkYjEzNGIwYzE1ZmJhNTAyZmQ5NGIRLXjZ: 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NGRlMjFlOTNlNjdkYjEzNGIwYzE1ZmJhNTAyZmQ5NGIRLXjZ: 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: ]] 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.370 23:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.302 nvme0n1 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.302 
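
The initiator half of each pass is ordinary SPDK RPC traffic and can be replayed by hand. A sketch of the ffdhe8192/keyid=2 pass just above, assuming the default RPC socket and that the key2/ckey2 keyring entries were registered earlier in the test (not shown in this excerpt):

    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2                 # prints nvme0n1 on success
    ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'  # expect: nvme0
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0
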
23:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzgzZDk3MDg4MjQ3YzgxZjA1YzBkNzEyMjc1Y2FjMTI2ZDhkMjg5NDkyOTRmNjBlfGQojw==: 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzgzZDk3MDg4MjQ3YzgxZjA1YzBkNzEyMjc1Y2FjMTI2ZDhkMjg5NDkyOTRmNjBlfGQojw==: 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: ]] 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
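
On the target side, nvmet_auth_set_key (@103) drives the echo traces at host/auth.sh@48-@51: digest, DH group, host key, and, when a ckey exists, the bidirectional controller key. A sketch of the equivalent manual setup, assuming the Linux nvmet configfs layout (the path is an assumption; it is never printed in this trace):

    hostnqn=nqn.2024-02.io.spdk:host0
    cfs=/sys/kernel/config/nvmet/hosts/$hostnqn
    echo 'hmac(sha256)' > "$cfs/dhchap_hash"       # @48: digest
    echo ffdhe8192      > "$cfs/dhchap_dhgroup"    # @49: DH group
    echo "$key"         > "$cfs/dhchap_key"        # @50: host key
    [[ -n $ckey ]] && echo "$ckey" > "$cfs/dhchap_ctrl_key"   # @51: bidirectional key, if set

Note that keyid=4 carries no ckey ([[ -z '' ]] at @51), so the ${ckeys[keyid]:+...} expansion at @58 drops --dhchap-ctrlr-key from the attach, as the key4 passes elsewhere in this run show.
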
00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.302 23:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.232 nvme0n1 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2U1YzE3MDQ4MTE2MTVmM2IxZWVmOTY2MGRiNTIwNGY0ZjNkMmRkMDBmOTVlZGQ4Yzk2NWE0ZWQxZWVhMzBlMcp+cwM=: 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2U1YzE3MDQ4MTE2MTVmM2IxZWVmOTY2MGRiNTIwNGY0ZjNkMmRkMDBmOTVlZGQ4Yzk2NWE0ZWQxZWVhMzBlMcp+cwM=: 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:39.232 
23:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.232 23:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.163 nvme0n1 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVjY2FmOWU2MGUwNmRkYjUxNWM4NjY5Y2NhYmQ4MDG/Yf1q: 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVjY2FmOWU2MGUwNmRkYjUxNWM4NjY5Y2NhYmQ4MDG/Yf1q: 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: ]] 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.163 nvme0n1 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.163 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTEyNDgyMTQyM2NhMTgwYWU4OWI2YmZhOGIwNDBmMTM4Y2ViYzk4ZmU2Yjc4ZDk0anoPCA==: 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTEyNDgyMTQyM2NhMTgwYWU4OWI2YmZhOGIwNDBmMTM4Y2ViYzk4ZmU2Yjc4ZDk0anoPCA==: 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: ]] 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
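[The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line traced above is what switches the test between unidirectional and bidirectional authentication: bash's :+ expansion yields the flag pair only when a controller key exists for that keyid, so the attach command silently drops --dhchap-ctrlr-key when ckeys[keyid] is empty — as it is for keyid 4 above, whose attach carries only --dhchap-key key4. A standalone sketch of the idiom, with hypothetical values:

  ckeys=([0]='DHHC-1:03:placeholder=:' [4]='')   # hypothetical; keyid 4 has no ctrlr key
  keyid=4
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo ${#ckey[@]}   # 0 -> no extra flags appended; with keyid=0 it would print 2
]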
00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.420 nvme0n1 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.420 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.421 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.421 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.421 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.421 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.421 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:23:40.421 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.421 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:40.421 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:40.421 23:50:15 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:23:40.421 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRlMjFlOTNlNjdkYjEzNGIwYzE1ZmJhNTAyZmQ5NGIRLXjZ: 00:23:40.421 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: 00:23:40.421 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:40.421 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:40.421 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRlMjFlOTNlNjdkYjEzNGIwYzE1ZmJhNTAyZmQ5NGIRLXjZ: 00:23:40.421 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: ]] 00:23:40.421 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: 00:23:40.421 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:23:40.421 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.421 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:40.421 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:40.421 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:40.421 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.421 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:40.421 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.421 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.421 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.678 nvme0n1 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzgzZDk3MDg4MjQ3YzgxZjA1YzBkNzEyMjc1Y2FjMTI2ZDhkMjg5NDkyOTRmNjBlfGQojw==: 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzgzZDk3MDg4MjQ3YzgxZjA1YzBkNzEyMjc1Y2FjMTI2ZDhkMjg5NDkyOTRmNjBlfGQojw==: 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: ]] 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.678 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.935 nvme0n1 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2U1YzE3MDQ4MTE2MTVmM2IxZWVmOTY2MGRiNTIwNGY0ZjNkMmRkMDBmOTVlZGQ4Yzk2NWE0ZWQxZWVhMzBlMcp+cwM=: 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:N2U1YzE3MDQ4MTE2MTVmM2IxZWVmOTY2MGRiNTIwNGY0ZjNkMmRkMDBmOTVlZGQ4Yzk2NWE0ZWQxZWVhMzBlMcp+cwM=: 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.935 23:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.192 nvme0n1 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVjY2FmOWU2MGUwNmRkYjUxNWM4NjY5Y2NhYmQ4MDG/Yf1q: 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVjY2FmOWU2MGUwNmRkYjUxNWM4NjY5Y2NhYmQ4MDG/Yf1q: 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: ]] 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
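[Each get_main_ns_ip run traced in this section reduces to the same few lines of nvmf/common.sh: an associative array maps the transport to the *name* of the environment variable holding a usable address, and that name is then read through indirect expansion. A condensed sketch of that selection logic (the real helper also handles the error paths the [[ -z ... ]] checks above guard):

  declare -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
  NVMF_INITIATOR_IP=10.0.0.1
  TEST_TRANSPORT=tcp
  ip=${ip_candidates[$TEST_TRANSPORT]}   # -> NVMF_INITIATOR_IP, as in the trace
  echo "${!ip}"                          # indirect expansion -> 10.0.0.1
]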
00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.192 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.450 nvme0n1 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTEyNDgyMTQyM2NhMTgwYWU4OWI2YmZhOGIwNDBmMTM4Y2ViYzk4ZmU2Yjc4ZDk0anoPCA==: 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTEyNDgyMTQyM2NhMTgwYWU4OWI2YmZhOGIwNDBmMTM4Y2ViYzk4ZmU2Yjc4ZDk0anoPCA==: 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: ]] 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
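[Stripped of xtrace noise, every connect_authenticate iteration in this section is the same four-RPC round trip against the local target; the sha384/ffdhe3072 keyid=1 pass that starts above looks like this (addresses, NQNs, and key names exactly as in the log; registration of the named keys key1/ckey1 happens earlier in auth.sh and is assumed here):

  rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
  rpc_cmd bdev_nvme_detach_controller nvme0   # tear down before the next combination
]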
00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.450 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.708 nvme0n1 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRlMjFlOTNlNjdkYjEzNGIwYzE1ZmJhNTAyZmQ5NGIRLXjZ: 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRlMjFlOTNlNjdkYjEzNGIwYzE1ZmJhNTAyZmQ5NGIRLXjZ: 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: ]] 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.708 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.966 nvme0n1 00:23:41.966 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.966 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.966 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.966 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.966 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:41.966 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.966 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.966 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.966 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.967 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.967 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.967 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:41.967 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:23:41.967 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.967 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:41.967 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:41.967 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:41.967 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzgzZDk3MDg4MjQ3YzgxZjA1YzBkNzEyMjc1Y2FjMTI2ZDhkMjg5NDkyOTRmNjBlfGQojw==: 00:23:41.967 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: 00:23:41.967 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:41.967 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:41.967 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzgzZDk3MDg4MjQ3YzgxZjA1YzBkNzEyMjc1Y2FjMTI2ZDhkMjg5NDkyOTRmNjBlfGQojw==: 00:23:41.967 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: ]] 00:23:41.967 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: 00:23:41.967 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:23:41.967 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:41.967 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:41.967 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:41.967 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:41.967 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.967 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:41.967 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.967 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.967 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.967 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:41.967 23:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:41.967 23:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:41.967 23:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:41.967 23:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.967 23:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.967 23:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:41.967 23:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.967 23:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:41.967 23:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:41.967 23:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:41.967 23:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:41.967 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.967 23:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.225 nvme0n1 00:23:42.225 23:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.225 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.225 23:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.225 23:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.225 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.225 23:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.225 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.225 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.225 23:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.225 23:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.225 23:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.225 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:42.225 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:23:42.225 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.225 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:42.225 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:42.225 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:42.225 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:N2U1YzE3MDQ4MTE2MTVmM2IxZWVmOTY2MGRiNTIwNGY0ZjNkMmRkMDBmOTVlZGQ4Yzk2NWE0ZWQxZWVhMzBlMcp+cwM=: 00:23:42.225 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:42.225 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:42.225 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:42.225 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2U1YzE3MDQ4MTE2MTVmM2IxZWVmOTY2MGRiNTIwNGY0ZjNkMmRkMDBmOTVlZGQ4Yzk2NWE0ZWQxZWVhMzBlMcp+cwM=: 00:23:42.225 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:42.225 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:23:42.226 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.226 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:42.226 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:42.226 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:42.226 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.226 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:42.226 23:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.226 23:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.226 23:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.226 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.226 23:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:42.226 23:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:42.226 23:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:42.226 23:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.226 23:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.226 23:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:42.226 23:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.226 23:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:42.226 23:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:42.226 23:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:42.226 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:42.226 23:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.226 23:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.484 nvme0n1 00:23:42.484 23:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.484 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.484 23:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.484 23:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.484 23:50:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.484 23:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.484 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.484 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.484 23:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.484 23:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.484 23:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.484 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:42.484 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:42.484 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:23:42.484 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.484 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:42.484 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:42.484 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:42.484 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVjY2FmOWU2MGUwNmRkYjUxNWM4NjY5Y2NhYmQ4MDG/Yf1q: 00:23:42.484 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: 00:23:42.484 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:42.484 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:42.484 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVjY2FmOWU2MGUwNmRkYjUxNWM4NjY5Y2NhYmQ4MDG/Yf1q: 00:23:42.484 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: ]] 00:23:42.484 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: 00:23:42.485 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:23:42.485 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.485 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:42.485 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:42.485 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:42.485 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.485 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:42.485 23:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.485 23:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.485 23:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.485 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.485 23:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:42.485 23:50:17 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:23:42.485 23:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:42.485 23:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.485 23:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.485 23:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:42.485 23:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.485 23:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:42.485 23:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:42.485 23:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:42.485 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:42.485 23:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.485 23:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.743 nvme0n1 00:23:42.743 23:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.743 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.743 23:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.743 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.743 23:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.743 23:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.743 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.743 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.743 23:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.743 23:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.743 23:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.743 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:42.743 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:23:42.743 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.743 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:42.743 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:42.743 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:42.743 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTEyNDgyMTQyM2NhMTgwYWU4OWI2YmZhOGIwNDBmMTM4Y2ViYzk4ZmU2Yjc4ZDk0anoPCA==: 00:23:42.743 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: 00:23:42.743 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:42.743 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:42.743 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTEyNDgyMTQyM2NhMTgwYWU4OWI2YmZhOGIwNDBmMTM4Y2ViYzk4ZmU2Yjc4ZDk0anoPCA==: 00:23:42.743 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: ]] 00:23:42.743 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: 00:23:42.743 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:23:42.743 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.743 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:42.743 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:42.743 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:42.743 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.744 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:42.744 23:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.744 23:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.744 23:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.744 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.744 23:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:42.744 23:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:42.744 23:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:42.744 23:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.744 23:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.744 23:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:42.744 23:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.744 23:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:42.744 23:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:42.744 23:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:42.744 23:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:42.744 23:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.744 23:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.002 nvme0n1 00:23:43.002 23:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.002 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.002 23:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.002 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.002 23:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.002 23:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.260 23:50:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.260 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.260 23:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.260 23:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.260 23:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.260 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:43.260 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:23:43.260 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.260 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:43.260 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:43.260 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:43.260 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRlMjFlOTNlNjdkYjEzNGIwYzE1ZmJhNTAyZmQ5NGIRLXjZ: 00:23:43.260 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: 00:23:43.260 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:43.260 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:43.260 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRlMjFlOTNlNjdkYjEzNGIwYzE1ZmJhNTAyZmQ5NGIRLXjZ: 00:23:43.260 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: ]] 00:23:43.260 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: 00:23:43.260 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:23:43.260 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:43.260 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:43.260 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:43.260 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:43.260 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.260 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:43.260 23:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.260 23:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.261 23:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.261 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.261 23:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:43.261 23:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:43.261 23:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:43.261 23:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.261 23:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.261 23:50:18 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:43.261 23:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.261 23:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:43.261 23:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:43.261 23:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:43.261 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:43.261 23:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.261 23:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.522 nvme0n1 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzgzZDk3MDg4MjQ3YzgxZjA1YzBkNzEyMjc1Y2FjMTI2ZDhkMjg5NDkyOTRmNjBlfGQojw==: 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzgzZDk3MDg4MjQ3YzgxZjA1YzBkNzEyMjc1Y2FjMTI2ZDhkMjg5NDkyOTRmNjBlfGQojw==: 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: ]] 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:23:43.522 23:50:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.522 23:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.781 nvme0n1 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2U1YzE3MDQ4MTE2MTVmM2IxZWVmOTY2MGRiNTIwNGY0ZjNkMmRkMDBmOTVlZGQ4Yzk2NWE0ZWQxZWVhMzBlMcp+cwM=: 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2U1YzE3MDQ4MTE2MTVmM2IxZWVmOTY2MGRiNTIwNGY0ZjNkMmRkMDBmOTVlZGQ4Yzk2NWE0ZWQxZWVhMzBlMcp+cwM=: 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:43.781 23:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.039 nvme0n1 00:23:44.039 23:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.039 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.039 23:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.039 23:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.039 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:44.039 23:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.039 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.039 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:44.039 23:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.039 23:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.363 23:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.363 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:44.363 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:44.363 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:23:44.363 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:44.363 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:44.363 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:44.363 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:44.363 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVjY2FmOWU2MGUwNmRkYjUxNWM4NjY5Y2NhYmQ4MDG/Yf1q: 00:23:44.363 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: 00:23:44.363 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:44.364 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:44.364 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVjY2FmOWU2MGUwNmRkYjUxNWM4NjY5Y2NhYmQ4MDG/Yf1q: 00:23:44.364 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: ]] 00:23:44.364 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: 00:23:44.364 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:23:44.364 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:44.364 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:44.364 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:44.364 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:44.364 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:44.364 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:23:44.364 23:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.364 23:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.364 23:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.364 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:44.364 23:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:44.364 23:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:44.364 23:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:44.364 23:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.364 23:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.364 23:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:44.364 23:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.364 23:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:44.364 23:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:44.364 23:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:44.364 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:44.364 23:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.364 23:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.622 nvme0n1 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTEyNDgyMTQyM2NhMTgwYWU4OWI2YmZhOGIwNDBmMTM4Y2ViYzk4ZmU2Yjc4ZDk0anoPCA==: 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTEyNDgyMTQyM2NhMTgwYWU4OWI2YmZhOGIwNDBmMTM4Y2ViYzk4ZmU2Yjc4ZDk0anoPCA==: 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: ]] 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.622 23:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.189 nvme0n1 00:23:45.189 23:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.189 23:50:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:45.189 23:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.189 23:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.189 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:45.189 23:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.189 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:45.189 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:45.189 23:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.189 23:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.189 23:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.189 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:45.189 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:23:45.189 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:45.189 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:45.189 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:45.189 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:45.189 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRlMjFlOTNlNjdkYjEzNGIwYzE1ZmJhNTAyZmQ5NGIRLXjZ: 00:23:45.189 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: 00:23:45.189 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:45.189 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:45.189 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRlMjFlOTNlNjdkYjEzNGIwYzE1ZmJhNTAyZmQ5NGIRLXjZ: 00:23:45.189 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: ]] 00:23:45.189 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: 00:23:45.189 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:23:45.189 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:45.189 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:45.189 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:45.189 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:45.189 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:45.189 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:45.189 23:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.189 23:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.189 23:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.189 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:45.189 23:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:23:45.189 23:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:45.189 23:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:45.189 23:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:45.189 23:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:45.189 23:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:45.189 23:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:45.189 23:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:45.190 23:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:45.190 23:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:45.190 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:45.190 23:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.190 23:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.755 nvme0n1 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzgzZDk3MDg4MjQ3YzgxZjA1YzBkNzEyMjc1Y2FjMTI2ZDhkMjg5NDkyOTRmNjBlfGQojw==: 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MzgzZDk3MDg4MjQ3YzgxZjA1YzBkNzEyMjc1Y2FjMTI2ZDhkMjg5NDkyOTRmNjBlfGQojw==: 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: ]] 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.755 23:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.320 nvme0n1 00:23:46.320 23:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.320 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:46.320 23:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.320 23:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.320 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:46.320 23:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.320 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:23:46.320 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:46.320 23:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.320 23:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.320 23:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.320 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:46.320 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:23:46.320 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:46.320 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:46.320 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:46.320 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:46.320 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2U1YzE3MDQ4MTE2MTVmM2IxZWVmOTY2MGRiNTIwNGY0ZjNkMmRkMDBmOTVlZGQ4Yzk2NWE0ZWQxZWVhMzBlMcp+cwM=: 00:23:46.320 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:46.320 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:46.320 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:46.321 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2U1YzE3MDQ4MTE2MTVmM2IxZWVmOTY2MGRiNTIwNGY0ZjNkMmRkMDBmOTVlZGQ4Yzk2NWE0ZWQxZWVhMzBlMcp+cwM=: 00:23:46.321 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:46.321 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:23:46.321 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:46.321 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:46.321 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:46.321 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:46.321 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:46.321 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:46.321 23:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.321 23:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.321 23:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.321 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:46.321 23:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:46.321 23:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:46.321 23:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:46.321 23:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.321 23:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.321 23:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:46.321 23:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.321 23:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:23:46.321 23:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:46.321 23:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:46.321 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:46.321 23:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.321 23:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.885 nvme0n1 00:23:46.885 23:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.885 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:46.885 23:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.885 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:46.885 23:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.885 23:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.885 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.885 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:46.885 23:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.885 23:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.885 23:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.885 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:46.885 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:46.885 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:23:46.885 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:46.885 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:46.885 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:46.885 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:46.885 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVjY2FmOWU2MGUwNmRkYjUxNWM4NjY5Y2NhYmQ4MDG/Yf1q: 00:23:46.885 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: 00:23:46.885 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:46.886 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:46.886 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVjY2FmOWU2MGUwNmRkYjUxNWM4NjY5Y2NhYmQ4MDG/Yf1q: 00:23:46.886 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: ]] 00:23:46.886 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: 00:23:46.886 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:23:46.886 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
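Every hmac(sha384) pass in this stretch of the log has the same shape: host/auth.sh@101-103 iterates each DH group and each key id, installs the key on the kernel nvmet target, and connect_authenticate (@55-65) then drives the SPDK host side through the rpc_cmd wrapper. A condensed sketch of that loop, reconstructed from the xtrace markers above (names follow the trace; the upstream script may differ):

    # Reconstructed driver loop; the digest is fixed at sha384 here.
    for dhgroup in "${dhgroups[@]}"; do                   # ffdhe3072..ffdhe8192 in this excerpt
        for keyid in "${!keys[@]}"; do                    # 0..4
            nvmet_auth_set_key sha384 "$dhgroup" "$keyid"    # target side (@42-51)
            connect_authenticate sha384 "$dhgroup" "$keyid"  # host side (@55-65)
        done
    done

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Key id 4 has no controller key, so ckey expands to nothing (@58).
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
            --dhchap-dhgroups "$dhgroup"                     # (@60)
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"          # (@61)
        # Authentication succeeded iff the controller shows up; then tear down.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]  # (@64)
        rpc_cmd bdev_nvme_detach_controller nvme0            # (@65)
    }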
00:23:46.886 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:46.886 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:46.886 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:46.886 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:46.886 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:46.886 23:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.886 23:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.886 23:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.886 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:46.886 23:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:46.886 23:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:46.886 23:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:46.886 23:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.886 23:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.886 23:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:46.886 23:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.886 23:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:46.886 23:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:46.886 23:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:46.886 23:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:46.886 23:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.886 23:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.818 nvme0n1 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTEyNDgyMTQyM2NhMTgwYWU4OWI2YmZhOGIwNDBmMTM4Y2ViYzk4ZmU2Yjc4ZDk0anoPCA==: 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTEyNDgyMTQyM2NhMTgwYWU4OWI2YmZhOGIwNDBmMTM4Y2ViYzk4ZmU2Yjc4ZDk0anoPCA==: 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: ]] 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.818 23:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.749 nvme0n1 00:23:48.749 23:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.749 23:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:48.749 23:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:48.749 23:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.749 23:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.749 23:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.749 23:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.749 23:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:48.749 23:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.749 23:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.749 23:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.749 23:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:48.749 23:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:23:48.749 23:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:48.749 23:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:48.749 23:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:48.749 23:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:48.749 23:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRlMjFlOTNlNjdkYjEzNGIwYzE1ZmJhNTAyZmQ5NGIRLXjZ: 00:23:48.749 23:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: 00:23:48.749 23:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:48.749 23:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:48.749 23:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRlMjFlOTNlNjdkYjEzNGIwYzE1ZmJhNTAyZmQ5NGIRLXjZ: 00:23:48.749 23:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: ]] 00:23:48.749 23:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: 00:23:48.750 23:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:23:48.750 23:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:48.750 23:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:48.750 23:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:48.750 23:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:48.750 23:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:48.750 23:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:23:48.750 23:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.750 23:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.750 23:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.750 23:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:48.750 23:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:48.750 23:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:48.750 23:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:48.750 23:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.750 23:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:48.750 23:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:48.750 23:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:48.750 23:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:48.750 23:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:48.750 23:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:48.750 23:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:48.750 23:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.750 23:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.680 nvme0n1 00:23:49.680 23:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.680 23:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.680 23:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.680 23:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.680 23:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.680 23:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.680 23:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.680 23:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.680 23:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.680 23:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.680 23:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.680 23:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.680 23:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:23:49.680 23:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.680 23:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:49.680 23:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:49.680 23:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:49.680 23:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MzgzZDk3MDg4MjQ3YzgxZjA1YzBkNzEyMjc1Y2FjMTI2ZDhkMjg5NDkyOTRmNjBlfGQojw==: 00:23:49.680 23:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: 00:23:49.680 23:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:49.680 23:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:49.680 23:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzgzZDk3MDg4MjQ3YzgxZjA1YzBkNzEyMjc1Y2FjMTI2ZDhkMjg5NDkyOTRmNjBlfGQojw==: 00:23:49.680 23:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: ]] 00:23:49.680 23:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: 00:23:49.680 23:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:23:49.680 23:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.680 23:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:49.680 23:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:49.680 23:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:49.680 23:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.680 23:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:49.680 23:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.680 23:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.680 23:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.680 23:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.681 23:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:49.681 23:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:49.681 23:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:49.681 23:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.681 23:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.681 23:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:49.681 23:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.681 23:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:49.681 23:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:49.681 23:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:49.681 23:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:49.681 23:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.681 23:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.613 nvme0n1 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2U1YzE3MDQ4MTE2MTVmM2IxZWVmOTY2MGRiNTIwNGY0ZjNkMmRkMDBmOTVlZGQ4Yzk2NWE0ZWQxZWVhMzBlMcp+cwM=: 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2U1YzE3MDQ4MTE2MTVmM2IxZWVmOTY2MGRiNTIwNGY0ZjNkMmRkMDBmOTVlZGQ4Yzk2NWE0ZWQxZWVhMzBlMcp+cwM=: 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:50.613 23:50:25 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.613 23:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.545 nvme0n1 00:23:51.545 23:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.545 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.545 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.545 23:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.545 23:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.545 23:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.545 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.545 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.545 23:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.545 23:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.545 23:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.545 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:51.545 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:51.546 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.546 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:23:51.546 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.546 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:51.546 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:51.546 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:51.546 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVjY2FmOWU2MGUwNmRkYjUxNWM4NjY5Y2NhYmQ4MDG/Yf1q: 00:23:51.546 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: 00:23:51.546 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:51.546 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:51.546 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MmVjY2FmOWU2MGUwNmRkYjUxNWM4NjY5Y2NhYmQ4MDG/Yf1q: 00:23:51.546 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: ]] 00:23:51.546 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: 00:23:51.546 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:23:51.546 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.546 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:51.546 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:51.546 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:51.546 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.546 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:51.546 23:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.546 23:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.546 23:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.546 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.546 23:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:51.546 23:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:51.546 23:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:51.546 23:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.546 23:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.546 23:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:51.546 23:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.546 23:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:51.546 23:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:51.546 23:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:51.546 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:51.546 23:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.546 23:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.546 nvme0n1 00:23:51.546 23:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.546 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.546 23:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.546 23:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.546 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.546 23:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.805 23:50:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTEyNDgyMTQyM2NhMTgwYWU4OWI2YmZhOGIwNDBmMTM4Y2ViYzk4ZmU2Yjc4ZDk0anoPCA==: 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTEyNDgyMTQyM2NhMTgwYWU4OWI2YmZhOGIwNDBmMTM4Y2ViYzk4ZmU2Yjc4ZDk0anoPCA==: 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: ]] 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.805 nvme0n1 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRlMjFlOTNlNjdkYjEzNGIwYzE1ZmJhNTAyZmQ5NGIRLXjZ: 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRlMjFlOTNlNjdkYjEzNGIwYzE1ZmJhNTAyZmQ5NGIRLXjZ: 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: ]] 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.805 23:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.063 23:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.063 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.063 23:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:52.063 23:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:52.063 23:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:52.063 23:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.063 23:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.063 23:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:52.063 23:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.063 23:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:52.063 23:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:52.063 23:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:52.063 23:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:52.063 23:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.064 23:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.064 nvme0n1 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.064 23:50:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzgzZDk3MDg4MjQ3YzgxZjA1YzBkNzEyMjc1Y2FjMTI2ZDhkMjg5NDkyOTRmNjBlfGQojw==: 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzgzZDk3MDg4MjQ3YzgxZjA1YzBkNzEyMjc1Y2FjMTI2ZDhkMjg5NDkyOTRmNjBlfGQojw==: 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: ]] 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:52.064 23:50:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.064 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.322 nvme0n1 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2U1YzE3MDQ4MTE2MTVmM2IxZWVmOTY2MGRiNTIwNGY0ZjNkMmRkMDBmOTVlZGQ4Yzk2NWE0ZWQxZWVhMzBlMcp+cwM=: 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2U1YzE3MDQ4MTE2MTVmM2IxZWVmOTY2MGRiNTIwNGY0ZjNkMmRkMDBmOTVlZGQ4Yzk2NWE0ZWQxZWVhMzBlMcp+cwM=: 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.322 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.581 nvme0n1 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MmVjY2FmOWU2MGUwNmRkYjUxNWM4NjY5Y2NhYmQ4MDG/Yf1q: 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVjY2FmOWU2MGUwNmRkYjUxNWM4NjY5Y2NhYmQ4MDG/Yf1q: 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: ]] 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.581 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.840 nvme0n1 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.840 
23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTEyNDgyMTQyM2NhMTgwYWU4OWI2YmZhOGIwNDBmMTM4Y2ViYzk4ZmU2Yjc4ZDk0anoPCA==: 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTEyNDgyMTQyM2NhMTgwYWU4OWI2YmZhOGIwNDBmMTM4Y2ViYzk4ZmU2Yjc4ZDk0anoPCA==: 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: ]] 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.840 23:50:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.840 23:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.098 nvme0n1 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRlMjFlOTNlNjdkYjEzNGIwYzE1ZmJhNTAyZmQ5NGIRLXjZ: 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
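
The nvmet_auth_set_key trace above is the target-side half of each pass: auth.sh@48-51 echo the HMAC digest, the FFDHE group, and the two DHHC-1 secrets for the host NQN under test. A rough standalone sketch of that step for the sha512/ffdhe3072 keyid=2 pass follows, assuming the usual Linux nvmet configfs layout (the /sys/kernel/config/nvmet path and the dhchap_* attribute names are an assumption; the log only shows the echoed values):

hostnqn=nqn.2024-02.io.spdk:host0                 # host NQN used throughout this run
host_dir=/sys/kernel/config/nvmet/hosts/$hostnqn  # assumed nvmet configfs location

echo 'hmac(sha512)' > "$host_dir/dhchap_hash"     # digest under test (auth.sh@48)
echo 'ffdhe3072' > "$host_dir/dhchap_dhgroup"     # DH group under test (auth.sh@49)
echo 'DHHC-1:01:NGRlMjFlOTNlNjdkYjEzNGIwYzE1ZmJhNTAyZmQ5NGIRLXjZ:' > "$host_dir/dhchap_key"       # host secret (auth.sh@50)
echo 'DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo:' > "$host_dir/dhchap_ctrl_key"  # controller secret, written only when a ckey exists (auth.sh@51)
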
00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRlMjFlOTNlNjdkYjEzNGIwYzE1ZmJhNTAyZmQ5NGIRLXjZ: 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: ]] 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.098 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.356 nvme0n1 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.356 23:50:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzgzZDk3MDg4MjQ3YzgxZjA1YzBkNzEyMjc1Y2FjMTI2ZDhkMjg5NDkyOTRmNjBlfGQojw==: 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzgzZDk3MDg4MjQ3YzgxZjA1YzBkNzEyMjc1Y2FjMTI2ZDhkMjg5NDkyOTRmNjBlfGQojw==: 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: ]] 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
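
The connect_authenticate trace around this point is the host-side half, and it repeats the same four RPCs on every pass. One iteration (sha512/ffdhe3072 with keyid=3 here), rewritten as a standalone sketch: it assumes SPDK's scripts/rpc.py against the default RPC socket and that keys named key3/ckey3 were loaded earlier in auth.sh; the options and arguments themselves are exactly the ones visible in the trace.

rpc=scripts/rpc.py

# Restrict the initiator to the digest/DH-group pair under test (auth.sh@60).
$rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

# Attach with DH-HMAC-CHAP; --dhchap-ctrlr-key requests bidirectional authentication (auth.sh@61).
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3 --dhchap-ctrlr-key ckey3

# Pass criterion: the controller exists once the handshake succeeds (auth.sh@64); then detach (auth.sh@65).
[[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
$rpc bdev_nvme_detach_controller nvme0
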
00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.356 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.614 nvme0n1 00:23:53.614 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.614 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.614 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.614 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.614 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.614 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.614 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.614 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.614 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.614 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.614 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.614 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:53.614 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:23:53.614 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.614 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:53.614 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:53.614 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:53.614 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2U1YzE3MDQ4MTE2MTVmM2IxZWVmOTY2MGRiNTIwNGY0ZjNkMmRkMDBmOTVlZGQ4Yzk2NWE0ZWQxZWVhMzBlMcp+cwM=: 00:23:53.614 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:53.614 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:53.615 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:53.615 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2U1YzE3MDQ4MTE2MTVmM2IxZWVmOTY2MGRiNTIwNGY0ZjNkMmRkMDBmOTVlZGQ4Yzk2NWE0ZWQxZWVhMzBlMcp+cwM=: 00:23:53.615 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:53.615 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:23:53.615 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.615 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:53.615 
23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:53.615 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:53.615 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.615 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:53.615 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.615 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.615 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.615 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.615 23:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:53.615 23:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:53.615 23:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:53.615 23:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.615 23:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.615 23:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:53.615 23:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.615 23:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:53.615 23:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:53.615 23:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:53.615 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:53.615 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.615 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.873 nvme0n1 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVjY2FmOWU2MGUwNmRkYjUxNWM4NjY5Y2NhYmQ4MDG/Yf1q: 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVjY2FmOWU2MGUwNmRkYjUxNWM4NjY5Y2NhYmQ4MDG/Yf1q: 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: ]] 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.873 23:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.131 nvme0n1 00:23:54.131 23:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.131 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.131 23:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.131 23:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.131 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:54.131 23:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.131 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.131 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.131 23:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.131 23:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.131 23:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.131 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:54.131 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:23:54.131 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:54.131 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:54.131 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:54.131 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:54.131 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTEyNDgyMTQyM2NhMTgwYWU4OWI2YmZhOGIwNDBmMTM4Y2ViYzk4ZmU2Yjc4ZDk0anoPCA==: 00:23:54.131 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: 00:23:54.131 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:54.131 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:54.131 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTEyNDgyMTQyM2NhMTgwYWU4OWI2YmZhOGIwNDBmMTM4Y2ViYzk4ZmU2Yjc4ZDk0anoPCA==: 00:23:54.131 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: ]] 00:23:54.131 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: 00:23:54.132 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:23:54.132 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:54.132 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:54.132 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:54.132 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:54.132 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:54.132 23:50:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:54.132 23:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.132 23:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.132 23:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.132 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:54.132 23:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:54.132 23:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:54.132 23:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:54.132 23:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.132 23:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.132 23:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:54.132 23:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.132 23:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:54.132 23:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:54.132 23:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:54.132 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:54.132 23:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.132 23:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.390 nvme0n1 00:23:54.390 23:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.390 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.390 23:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.390 23:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.390 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:54.390 23:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.648 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.648 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.648 23:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.648 23:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.648 23:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.648 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:54.648 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:23:54.648 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:54.648 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:54.648 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:54.648 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
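Every attach in this section is checked the same way before moving on, as seen a few entries up: bdev_nvme_get_controllers is queried, jq pulls the controller name, the result is compared against nvme0 (the \n\v\m\e\0 pattern is just bash quoting the literal string inside [[ ]]), and the controller is detached so the next keyid starts from a clean slate. As a sketch of that verify-and-teardown step, under the same rpc_cmd assumption as above:

  # confirm the authenticated controller actually registered under the expected name
  name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]
  # detach so the next digest/dhgroup/keyid combination starts clean
  rpc_cmd bdev_nvme_detach_controller nvme0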
00:23:54.648 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRlMjFlOTNlNjdkYjEzNGIwYzE1ZmJhNTAyZmQ5NGIRLXjZ: 00:23:54.648 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: 00:23:54.648 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:54.648 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:54.648 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRlMjFlOTNlNjdkYjEzNGIwYzE1ZmJhNTAyZmQ5NGIRLXjZ: 00:23:54.648 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: ]] 00:23:54.648 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: 00:23:54.648 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:23:54.648 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:54.648 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:54.648 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:54.648 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:54.648 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:54.648 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:54.648 23:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.648 23:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.648 23:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.648 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:54.648 23:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:54.648 23:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:54.648 23:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:54.648 23:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.648 23:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.648 23:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:54.648 23:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.648 23:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:54.648 23:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:54.648 23:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:54.648 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:54.648 23:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.648 23:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.907 nvme0n1 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzgzZDk3MDg4MjQ3YzgxZjA1YzBkNzEyMjc1Y2FjMTI2ZDhkMjg5NDkyOTRmNjBlfGQojw==: 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzgzZDk3MDg4MjQ3YzgxZjA1YzBkNzEyMjc1Y2FjMTI2ZDhkMjg5NDkyOTRmNjBlfGQojw==: 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: ]] 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.907 23:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.165 nvme0n1 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2U1YzE3MDQ4MTE2MTVmM2IxZWVmOTY2MGRiNTIwNGY0ZjNkMmRkMDBmOTVlZGQ4Yzk2NWE0ZWQxZWVhMzBlMcp+cwM=: 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:N2U1YzE3MDQ4MTE2MTVmM2IxZWVmOTY2MGRiNTIwNGY0ZjNkMmRkMDBmOTVlZGQ4Yzk2NWE0ZWQxZWVhMzBlMcp+cwM=: 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.165 23:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.423 nvme0n1 00:23:55.423 23:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.423 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:55.423 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:55.423 23:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.423 23:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.423 23:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.423 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.423 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:55.681 23:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:23:55.681 23:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.681 23:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.681 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:55.681 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:55.681 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:23:55.681 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:55.681 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:55.681 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:55.681 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:55.681 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVjY2FmOWU2MGUwNmRkYjUxNWM4NjY5Y2NhYmQ4MDG/Yf1q: 00:23:55.681 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: 00:23:55.681 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:55.681 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:55.681 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVjY2FmOWU2MGUwNmRkYjUxNWM4NjY5Y2NhYmQ4MDG/Yf1q: 00:23:55.681 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: ]] 00:23:55.681 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: 00:23:55.681 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:23:55.681 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:55.681 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:55.681 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:55.681 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:55.681 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:55.681 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:55.681 23:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.681 23:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.681 23:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.681 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:55.681 23:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:55.681 23:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:55.681 23:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:55.681 23:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:55.681 23:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:55.681 23:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
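One subtlety in connect_authenticate, visible at auth.sh@58 above: the controller (bidirectional) key is optional. It is built as a bash array with ${ckeys[keyid]:+...} expansion, so when ckeys[keyid] is empty, as it is for keyid 4 in this run, the --dhchap-ctrlr-key flag vanishes from the attach command entirely instead of being passed an empty value. Schematically (the parameterized attach line is a reconstruction; the trace shows it with the literal key names already substituted):

  # expands to two extra words when a controller key exists, to nothing otherwise
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" "${ckey[@]}"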
00:23:55.681 23:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:55.681 23:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:55.681 23:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:55.681 23:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:55.681 23:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:55.681 23:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.681 23:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.938 nvme0n1 00:23:55.938 23:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTEyNDgyMTQyM2NhMTgwYWU4OWI2YmZhOGIwNDBmMTM4Y2ViYzk4ZmU2Yjc4ZDk0anoPCA==: 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTEyNDgyMTQyM2NhMTgwYWU4OWI2YmZhOGIwNDBmMTM4Y2ViYzk4ZmU2Yjc4ZDk0anoPCA==: 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: ]] 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
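The get_main_ns_ip block that repeats before every attach (nvmf/common.sh@741 through @755) resolves which address to dial for the transport in use: ip_candidates maps rdma to NVMF_FIRST_TARGET_IP and tcp to NVMF_INITIATOR_IP, the matching variable name is selected, and its value, 10.0.0.1 here, is echoed. A reconstruction of the idea (the indirect ${!ip} expansion and the $TEST_TRANSPORT variable name are assumptions; the xtrace only shows the chosen name and the final echo):

  declare -A ip_candidates=( [rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP )
  ip=${ip_candidates[$TEST_TRANSPORT]}    # tcp -> NVMF_INITIATOR_IP
  [[ -z ${!ip} ]] || echo "${!ip}"        # indirect expansion -> 10.0.0.1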
00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.196 23:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.761 nvme0n1 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRlMjFlOTNlNjdkYjEzNGIwYzE1ZmJhNTAyZmQ5NGIRLXjZ: 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRlMjFlOTNlNjdkYjEzNGIwYzE1ZmJhNTAyZmQ5NGIRLXjZ: 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: ]] 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.761 23:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.328 nvme0n1 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzgzZDk3MDg4MjQ3YzgxZjA1YzBkNzEyMjc1Y2FjMTI2ZDhkMjg5NDkyOTRmNjBlfGQojw==: 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzgzZDk3MDg4MjQ3YzgxZjA1YzBkNzEyMjc1Y2FjMTI2ZDhkMjg5NDkyOTRmNjBlfGQojw==: 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: ]] 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.328 23:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.895 nvme0n1 00:23:57.895 23:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.895 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:57.895 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:57.895 23:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.895 23:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.895 23:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.895 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.895 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:57.895 23:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.895 23:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.895 23:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.895 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:57.895 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:23:57.895 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:57.895 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:57.895 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:57.895 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:57.895 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:N2U1YzE3MDQ4MTE2MTVmM2IxZWVmOTY2MGRiNTIwNGY0ZjNkMmRkMDBmOTVlZGQ4Yzk2NWE0ZWQxZWVhMzBlMcp+cwM=: 00:23:57.895 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:57.895 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:57.895 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:57.895 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2U1YzE3MDQ4MTE2MTVmM2IxZWVmOTY2MGRiNTIwNGY0ZjNkMmRkMDBmOTVlZGQ4Yzk2NWE0ZWQxZWVhMzBlMcp+cwM=: 00:23:57.895 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:57.895 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:23:57.895 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:57.895 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:57.895 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:57.895 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:57.895 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:57.896 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:57.896 23:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.896 23:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.896 23:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.896 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:57.896 23:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:57.896 23:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:57.896 23:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:57.896 23:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:57.896 23:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:57.896 23:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:57.896 23:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:57.896 23:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:57.896 23:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:57.896 23:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:57.896 23:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:57.896 23:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.896 23:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.461 nvme0n1 00:23:58.461 23:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.461 23:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:58.461 23:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.461 23:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.461 23:50:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:58.461 23:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.461 23:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.461 23:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.461 23:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.461 23:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.461 23:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.461 23:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:58.461 23:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:58.461 23:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:23:58.461 23:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:58.461 23:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:58.461 23:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:58.461 23:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:58.461 23:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVjY2FmOWU2MGUwNmRkYjUxNWM4NjY5Y2NhYmQ4MDG/Yf1q: 00:23:58.461 23:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: 00:23:58.461 23:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:58.461 23:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:58.461 23:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVjY2FmOWU2MGUwNmRkYjUxNWM4NjY5Y2NhYmQ4MDG/Yf1q: 00:23:58.461 23:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: ]] 00:23:58.461 23:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmIwOTgwNmFiZDJlMThhYWU5YTliODYxYTBiOTk4ZGQwNmU4ZjhiYTNhZWQ2ZTU5M2I1MWM3MWQ0Yjc0ODk5MCfv9RA=: 00:23:58.461 23:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:23:58.461 23:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:58.461 23:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:58.461 23:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:58.461 23:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:58.461 23:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:58.461 23:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:58.461 23:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.462 23:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.462 23:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.462 23:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:58.462 23:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:58.462 23:50:33 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:23:58.462 23:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:58.462 23:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:58.462 23:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:58.462 23:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:58.462 23:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:58.462 23:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:58.462 23:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:58.462 23:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:58.462 23:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:58.462 23:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.462 23:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.394 nvme0n1 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTEyNDgyMTQyM2NhMTgwYWU4OWI2YmZhOGIwNDBmMTM4Y2ViYzk4ZmU2Yjc4ZDk0anoPCA==: 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTEyNDgyMTQyM2NhMTgwYWU4OWI2YmZhOGIwNDBmMTM4Y2ViYzk4ZmU2Yjc4ZDk0anoPCA==: 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: ]] 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.394 23:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.328 nvme0n1 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.328 23:50:35 
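The iteration above exercises keyid 1 with hmac(sha512)/ffdhe8192: nvmet_auth_set_key programs the kernel target, then connect_authenticate attaches from the SPDK side. On the target side, the echoes traced at host/auth.sh@48-@51 amount to configfs writes along these lines (a minimal sketch; the dhchap_* attribute names under /sys/kernel/config/nvmet/hosts are assumed from the echoed values, and the DHHC-1 key material is elided):

```bash
# Sketch of nvmet_auth_set_key's target-side effect (attribute names assumed).
hostnqn=nqn.2024-02.io.spdk:host0
host_dir=/sys/kernel/config/nvmet/hosts/$hostnqn

echo 'hmac(sha512)' > "$host_dir/dhchap_hash"       # digest for this loop iteration
echo 'ffdhe8192' > "$host_dir/dhchap_dhgroup"       # DH group for this loop iteration
echo 'DHHC-1:00:...' > "$host_dir/dhchap_key"       # host key for this keyid (elided)
echo 'DHHC-1:02:...' > "$host_dir/dhchap_ctrl_key"  # controller key, only when ckey is non-empty
```

The same four writes repeat for every digest/dhgroup/keyid combination generated by the loops at host/auth.sh@101-@103.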
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRlMjFlOTNlNjdkYjEzNGIwYzE1ZmJhNTAyZmQ5NGIRLXjZ: 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRlMjFlOTNlNjdkYjEzNGIwYzE1ZmJhNTAyZmQ5NGIRLXjZ: 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: ]] 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzUxMjA5MzNlMDZhNjA5NDhkMzA5M2JhMDU1YzMzZGK2T/Oo: 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.328 23:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.309 nvme0n1 00:24:01.309 23:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.309 23:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.309 23:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.309 23:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.309 23:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:01.309 23:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.309 23:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.309 23:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.309 23:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.309 23:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.309 23:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.309 23:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:01.309 23:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:24:01.309 23:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:01.309 23:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:01.309 23:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:01.309 23:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:01.309 23:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzgzZDk3MDg4MjQ3YzgxZjA1YzBkNzEyMjc1Y2FjMTI2ZDhkMjg5NDkyOTRmNjBlfGQojw==: 00:24:01.309 23:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: 00:24:01.309 23:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:01.309 23:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:01.309 23:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzgzZDk3MDg4MjQ3YzgxZjA1YzBkNzEyMjc1Y2FjMTI2ZDhkMjg5NDkyOTRmNjBlfGQojw==: 00:24:01.309 23:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: ]] 00:24:01.309 23:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzczYWRmZjdlYjBhMGM1ZjcyYjc3NWUwZjYwZmJiMjiGktp0: 00:24:01.309 23:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:24:01.309 23:50:36 
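connect_authenticate, invoked just above for keyid 3, drives the initiator side over JSON-RPC. Reconstructed from the flags traced in this run (rpc.py stands for scripts/rpc.py against the target's socket; key3/ckey3 name keys registered with SPDK's keyring earlier in the run, outside this excerpt):

```bash
# Initiator-side steps of connect_authenticate, as traced (keyid 3 shown).
rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3 --dhchap-ctrlr-key ckey3
rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expects nvme0 on success
rpc.py bdev_nvme_detach_controller nvme0              # detach before the next keyid
```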
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:01.309 23:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:01.309 23:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:01.309 23:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:01.309 23:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:01.309 23:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:01.309 23:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.309 23:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.309 23:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.309 23:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:01.309 23:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:01.310 23:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:01.310 23:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:01.310 23:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.310 23:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.310 23:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:01.310 23:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.310 23:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:01.310 23:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:01.310 23:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:01.310 23:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:01.310 23:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.310 23:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.874 nvme0n1 00:24:01.874 23:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.131 23:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.131 23:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:02.131 23:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.131 23:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.131 23:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.131 23:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.131 23:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.131 23:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.131 23:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.131 23:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.131 23:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:24:02.131 23:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:02.131 23:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:02.131 23:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:02.131 23:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:02.131 23:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:02.131 23:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2U1YzE3MDQ4MTE2MTVmM2IxZWVmOTY2MGRiNTIwNGY0ZjNkMmRkMDBmOTVlZGQ4Yzk2NWE0ZWQxZWVhMzBlMcp+cwM=: 00:24:02.131 23:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:02.131 23:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:02.131 23:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:02.131 23:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2U1YzE3MDQ4MTE2MTVmM2IxZWVmOTY2MGRiNTIwNGY0ZjNkMmRkMDBmOTVlZGQ4Yzk2NWE0ZWQxZWVhMzBlMcp+cwM=: 00:24:02.131 23:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:02.131 23:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:24:02.131 23:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:02.131 23:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:02.131 23:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:02.131 23:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:02.131 23:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:02.131 23:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:02.131 23:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.131 23:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.131 23:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.131 23:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:02.131 23:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:02.131 23:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:02.131 23:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:02.131 23:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.131 23:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.131 23:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:02.131 23:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.131 23:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:02.131 23:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:02.131 23:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:02.131 23:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:02.131 23:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:24:02.131 23:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.061 nvme0n1 00:24:03.061 23:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.061 23:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.061 23:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.061 23:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.061 23:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:03.061 23:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.061 23:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.061 23:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.061 23:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.061 23:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.061 23:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.061 23:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:03.061 23:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:03.061 23:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:03.061 23:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:03.061 23:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:03.062 23:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTEyNDgyMTQyM2NhMTgwYWU4OWI2YmZhOGIwNDBmMTM4Y2ViYzk4ZmU2Yjc4ZDk0anoPCA==: 00:24:03.062 23:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: 00:24:03.062 23:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:03.062 23:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:03.062 23:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTEyNDgyMTQyM2NhMTgwYWU4OWI2YmZhOGIwNDBmMTM4Y2ViYzk4ZmU2Yjc4ZDk0anoPCA==: 00:24:03.062 23:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: ]] 00:24:03.062 23:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWZiYTc3YWI1YTFhZjliYWMyYjI0MzljNTA3ZmJhYjg5ODhiMjg3OWY2MmM2MmEwI7urZQ==: 00:24:03.062 23:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:03.062 23:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.062 23:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.062 23:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.062 23:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:24:03.062 23:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:03.062 23:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:03.062 23:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:03.062 23:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.062 
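Keyid 4 just above is the unidirectional case: ckeys[4] is empty, so no controller key is written on the target and no --dhchap-ctrlr-key flag is passed on attach. The array expansion traced at host/auth.sh@58 is what drops the flag; a minimal demonstration:

```bash
# Verbatim pattern from host/auth.sh@58: expand to a flag pair only when a
# controller key exists for this keyid; otherwise expand to nothing.
keyid=4
declare -a ckeys
ckeys[4]=""
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${#ckey[@]} extra args"   # prints "0 extra args" for keyid 4;
                                # keyids 0-3 expand to: --dhchap-ctrlr-key ckeyN
```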
23:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.062 23:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:03.062 23:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.062 23:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:03.062 23:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:03.062 23:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:03.062 23:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:03.062 23:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:24:03.062 23:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:03.062 23:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:03.062 23:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:03.062 23:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:03.062 23:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:03.062 23:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:03.062 23:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.062 23:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.062 request: 00:24:03.062 { 00:24:03.062 "name": "nvme0", 00:24:03.062 "trtype": "tcp", 00:24:03.062 "traddr": "10.0.0.1", 00:24:03.062 "adrfam": "ipv4", 00:24:03.062 "trsvcid": "4420", 00:24:03.062 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:03.062 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:03.062 "prchk_reftag": false, 00:24:03.062 "prchk_guard": false, 00:24:03.062 "hdgst": false, 00:24:03.062 "ddgst": false, 00:24:03.062 "method": "bdev_nvme_attach_controller", 00:24:03.062 "req_id": 1 00:24:03.062 } 00:24:03.062 Got JSON-RPC error response 00:24:03.062 response: 00:24:03.062 { 00:24:03.062 "code": -5, 00:24:03.062 "message": "Input/output error" 00:24:03.062 } 00:24:03.062 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:03.062 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:24:03.062 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:03.062 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:03.062 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:03.062 23:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.062 23:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:24:03.062 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.062 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.062 23:50:38 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.062 23:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:24:03.062 23:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:24:03.062 23:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:03.062 23:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:03.062 23:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:03.062 23:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.062 23:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.062 23:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:03.062 23:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.062 23:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:03.062 23:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:03.062 23:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:03.062 23:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:03.062 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:24:03.062 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:03.062 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:03.062 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:03.062 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:03.062 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:03.062 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:03.062 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.062 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.062 request: 00:24:03.062 { 00:24:03.062 "name": "nvme0", 00:24:03.062 "trtype": "tcp", 00:24:03.062 "traddr": "10.0.0.1", 00:24:03.062 "adrfam": "ipv4", 00:24:03.062 "trsvcid": "4420", 00:24:03.062 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:03.062 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:03.062 "prchk_reftag": false, 00:24:03.062 "prchk_guard": false, 00:24:03.062 "hdgst": false, 00:24:03.062 "ddgst": false, 00:24:03.062 "dhchap_key": "key2", 00:24:03.062 "method": "bdev_nvme_attach_controller", 00:24:03.062 "req_id": 1 00:24:03.062 } 00:24:03.062 Got JSON-RPC error response 00:24:03.062 response: 00:24:03.062 { 00:24:03.062 "code": -5, 00:24:03.062 "message": "Input/output error" 00:24:03.062 } 00:24:03.062 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:03.062 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:24:03.062 23:50:38 
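Both rejections above are the point of the test: with DH-CHAP configured on the target, attaching with no key at all, or with key2 alone, fails with JSON-RPC code -5 (Input/output error). The NOT wrapper turns such an expected failure into a pass; a simplified sketch (the real helper in autotest_common.sh also validates its argument and special-cases exit statuses above 128):

```bash
NOT() {
    local es=0
    "$@" || es=$?
    ((es != 0))   # succeed only if the wrapped command failed
}

# No DH-CHAP key supplied: the target must refuse the connection.
NOT rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
```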
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:03.062 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:03.062 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:03.062 23:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.062 23:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:24:03.062 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.062 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.062 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.319 request: 00:24:03.319 { 00:24:03.319 "name": "nvme0", 00:24:03.319 "trtype": "tcp", 00:24:03.319 "traddr": "10.0.0.1", 00:24:03.319 "adrfam": "ipv4", 
00:24:03.319 "trsvcid": "4420", 00:24:03.319 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:03.319 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:03.319 "prchk_reftag": false, 00:24:03.319 "prchk_guard": false, 00:24:03.319 "hdgst": false, 00:24:03.319 "ddgst": false, 00:24:03.319 "dhchap_key": "key1", 00:24:03.319 "dhchap_ctrlr_key": "ckey2", 00:24:03.319 "method": "bdev_nvme_attach_controller", 00:24:03.319 "req_id": 1 00:24:03.319 } 00:24:03.319 Got JSON-RPC error response 00:24:03.319 response: 00:24:03.319 { 00:24:03.319 "code": -5, 00:24:03.319 "message": "Input/output error" 00:24:03.319 } 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:03.319 rmmod nvme_tcp 00:24:03.319 rmmod nvme_fabrics 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 3868939 ']' 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 3868939 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 3868939 ']' 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 3868939 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3868939 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3868939' 00:24:03.319 killing process with pid 3868939 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 3868939 00:24:03.319 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 3868939 00:24:03.576 23:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:24:03.576 23:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:03.576 23:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:03.576 23:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:03.576 23:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:03.576 23:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.576 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:03.576 23:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.107 23:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:06.107 23:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:06.107 23:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:06.107 23:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:24:06.107 23:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:06.107 23:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:24:06.107 23:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:06.107 23:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:06.107 23:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:06.107 23:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:06.107 23:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:06.107 23:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:06.107 23:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:07.041 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:07.041 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:07.041 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:07.041 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:07.041 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:07.041 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:07.041 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:07.041 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:07.041 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:07.041 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:07.041 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:07.041 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:07.041 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:07.041 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:07.041 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:07.041 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:07.973 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:24:08.233 23:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.h6V /tmp/spdk.key-null.QO6 /tmp/spdk.key-sha256.vQC /tmp/spdk.key-sha384.lDY /tmp/spdk.key-sha512.oBc 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:24:08.233 23:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:09.608 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:24:09.608 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:24:09.608 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:24:09.608 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:24:09.608 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:24:09.608 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:24:09.608 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:24:09.608 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:24:09.608 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:24:09.608 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:24:09.608 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:24:09.608 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:24:09.608 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:24:09.608 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:24:09.608 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:24:09.608 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:24:09.608 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:24:09.608 00:24:09.608 real 0m47.221s 00:24:09.608 user 0m44.508s 00:24:09.608 sys 0m5.838s 00:24:09.608 23:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:09.608 23:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.608 ************************************ 00:24:09.608 END TEST nvmf_auth_host 00:24:09.608 ************************************ 00:24:09.608 23:50:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:09.608 23:50:44 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:24:09.608 23:50:44 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:09.608 23:50:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:09.608 23:50:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:09.608 23:50:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:09.608 ************************************ 00:24:09.608 START TEST nvmf_digest 00:24:09.608 ************************************ 00:24:09.608 23:50:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:09.608 * Looking for test storage... 
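Before the digest suite proceeds, it is worth collecting the teardown the auth test just performed (host/auth.sh@25-@28 plus clean_kernel_target in nvmf/common.sh@684-@695): the kernel target is unwound in reverse build order and the generated key files are removed. Roughly, with the traced paths (the destination of the bare 'echo 0' at nvmf/common.sh@686 is assumed to be the namespace enable flag):

```bash
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
rm "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"
rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 0 > "$subsys/namespaces/1/enable"   # assumed target of the traced 'echo 0'
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
rmdir "$subsys/namespaces/1" /sys/kernel/config/nvmet/ports/1 "$subsys"
modprobe -r nvmet_tcp nvmet
rm -f /tmp/spdk.key-{null,sha256,sha384,sha512}.*   # generated DHHC-1 key files
```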
00:24:09.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:09.608 23:50:44 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:09.608 23:50:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:24:09.608 23:50:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:09.608 23:50:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:09.608 23:50:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:09.608 23:50:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:09.608 23:50:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:09.608 23:50:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:09.608 23:50:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:09.608 23:50:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:09.608 23:50:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:09.608 23:50:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:09.608 23:50:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:09.609 23:50:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:09.609 23:50:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:09.609 23:50:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:09.609 23:50:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:09.609 23:50:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:09.609 23:50:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:09.609 23:50:44 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:09.609 23:50:44 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:09.609 23:50:44 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:09.609 23:50:44 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.609 23:50:44 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.609 23:50:44 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.609 23:50:44 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:24:09.609 23:50:44 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.609 23:50:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:24:09.609 23:50:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:09.609 23:50:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:09.609 23:50:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:09.609 23:50:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:09.609 23:50:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:09.609 23:50:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:09.609 23:50:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:09.609 23:50:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:09.609 23:50:44 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:09.609 23:50:44 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:24:09.609 23:50:44 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:24:09.609 23:50:44 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:24:09.609 23:50:44 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:24:09.609 23:50:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:09.609 23:50:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:09.609 23:50:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:09.609 23:50:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:09.609 23:50:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:09.609 23:50:44 
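Earlier in the common.sh sourcing above (nvmf/common.sh@17-@19, before the PATH exports), a fresh host identity was derived for this digest run; reconstructed below (the suffix strip for NVME_HOSTID is an assumption that happens to match the traced values):

```bash
# Host identity derivation as traced at nvmf/common.sh@17-@19.
NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}     # bare UUID, e.g. 29f67375-a902-e411-ace9-001e67bc3c9a
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
```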
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:09.609 23:50:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:09.609 23:50:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:09.609 23:50:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:09.609 23:50:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:09.609 23:50:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:24:09.609 23:50:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:12.154 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:12.154 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:12.154 Found net devices under 0000:09:00.0: cvl_0_0 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:12.154 Found net devices under 0000:09:00.1: cvl_0_1 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:12.154 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:12.154 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:12.154 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:24:12.154 00:24:12.154 --- 10.0.0.2 ping statistics --- 00:24:12.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.155 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:24:12.155 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:12.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
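The PCI scan above matched the two E810 ports (0000:09:00.0 and 0000:09:00.1, net devices cvl_0_0 and cvl_0_1), and nvmf_tcp_init then splits them across a network namespace so initiator and target traffic traverse the physical link. The traced commands, collected:

```bash
# Namespace topology for NET_TYPE=phy, as traced (cvl_* names from this run).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target NIC
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2   # sanity check: initiator can reach the target namespace
```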
00:24:12.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:24:12.155 00:24:12.155 --- 10.0.0.1 ping statistics --- 00:24:12.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.155 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:24:12.155 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:12.155 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:24:12.155 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:12.155 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:12.155 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:12.155 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:12.155 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:12.155 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:12.155 23:50:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:12.155 23:50:46 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:12.155 23:50:46 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:24:12.155 23:50:46 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:24:12.155 23:50:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:12.155 23:50:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:12.155 23:50:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:12.155 ************************************ 00:24:12.155 START TEST nvmf_digest_clean 00:24:12.155 ************************************ 00:24:12.155 23:50:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:24:12.155 23:50:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:24:12.155 23:50:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:24:12.155 23:50:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:24:12.155 23:50:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:24:12.155 23:50:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:24:12.155 23:50:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:12.155 23:50:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:12.155 23:50:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:12.155 23:50:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=3878116 00:24:12.155 23:50:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:12.155 23:50:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 3878116 00:24:12.155 23:50:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3878116 ']' 00:24:12.155 23:50:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:12.155 
23:50:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:12.155 23:50:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:12.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:12.155 23:50:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:12.155 23:50:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:12.155 [2024-07-15 23:50:46.956296] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:24:12.155 [2024-07-15 23:50:46.956400] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:12.155 EAL: No free 2048 kB hugepages reported on node 1 00:24:12.155 [2024-07-15 23:50:47.020177] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.155 [2024-07-15 23:50:47.128412] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:12.155 [2024-07-15 23:50:47.128491] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:12.155 [2024-07-15 23:50:47.128504] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:12.155 [2024-07-15 23:50:47.128514] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:12.155 [2024-07-15 23:50:47.128523] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
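The startup sequence above reduces to a handful of steps. A minimal sketch, reconstructed from this trace (the workspace path is abbreviated to "."; the namespace name and all flags are verbatim from the log):

  # Launch the NVMe-oF target inside the namespace built during nvmf_tcp_init,
  # held at --wait-for-rpc so the test can configure it before subsystems start.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  # waitforlisten polls until the app serves RPCs on /var/tmp/spdk.sock; the
  # rpc_cmd block that follows then creates the null0 bdev and the TCP listener
  # on 10.0.0.2:4420 seen in the notices below.
  waitforlisten "$nvmfpid"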
00:24:12.155 [2024-07-15 23:50:47.128563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:12.155 23:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:12.155 23:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:12.155 23:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:12.155 23:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:12.155 23:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:12.155 23:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:12.155 23:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:24:12.155 23:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:24:12.155 23:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:24:12.155 23:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.155 23:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:12.413 null0 00:24:12.413 [2024-07-15 23:50:47.302970] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:12.413 [2024-07-15 23:50:47.327174] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:12.413 23:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.413 23:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:24:12.413 23:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:12.413 23:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:12.413 23:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:12.413 23:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:12.413 23:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:12.413 23:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:12.413 23:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3878144 00:24:12.413 23:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:12.413 23:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3878144 /var/tmp/bperf.sock 00:24:12.413 23:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3878144 ']' 00:24:12.413 23:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:12.413 23:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:12.413 23:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:24:12.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:12.413 23:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:12.413 23:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:12.413 [2024-07-15 23:50:47.371649] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:24:12.413 [2024-07-15 23:50:47.371726] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3878144 ] 00:24:12.413 EAL: No free 2048 kB hugepages reported on node 1 00:24:12.413 [2024-07-15 23:50:47.430398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.413 [2024-07-15 23:50:47.536704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:12.671 23:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:12.671 23:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:12.671 23:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:12.671 23:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:12.671 23:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:12.929 23:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:12.929 23:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:13.187 nvme0n1 00:24:13.187 23:50:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:13.187 23:50:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:13.445 Running I/O for 2 seconds... 
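Each bperf pass follows the same RPC choreography as the run just launched above. A condensed sketch using the commands from the trace (paths shortened; /var/tmp/bperf.sock is the per-run RPC socket):

  BPERF=/var/tmp/bperf.sock
  ./build/examples/bdevperf -m 2 -r "$BPERF" -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  ./scripts/rpc.py -s "$BPERF" framework_start_init
  # --ddgst enables the NVMe/TCP data digest (CRC32C) on the initiator side,
  # which is the feature this suite is exercising
  ./scripts/rpc.py -s "$BPERF" bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  ./examples/bdev/bdevperf/bdevperf.py -s "$BPERF" perform_tests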
00:24:15.346 00:24:15.346 Latency(us) 00:24:15.346 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:15.346 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:15.346 nvme0n1 : 2.00 19254.65 75.21 0.00 0.00 6638.99 3349.62 14466.47 00:24:15.346 =================================================================================================================== 00:24:15.346 Total : 19254.65 75.21 0.00 0.00 6638.99 3349.62 14466.47 00:24:15.346 0 00:24:15.346 23:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:15.346 23:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:15.346 23:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:15.346 23:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:15.346 | select(.opcode=="crc32c") 00:24:15.346 | "\(.module_name) \(.executed)"' 00:24:15.346 23:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:15.605 23:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:15.605 23:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:15.605 23:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:15.605 23:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:15.605 23:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3878144 00:24:15.605 23:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3878144 ']' 00:24:15.605 23:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3878144 00:24:15.605 23:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:15.605 23:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:15.605 23:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3878144 00:24:15.605 23:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:15.605 23:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:15.605 23:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3878144' 00:24:15.605 killing process with pid 3878144 00:24:15.605 23:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3878144 00:24:15.605 Received shutdown signal, test time was about 2.000000 seconds 00:24:15.605 00:24:15.605 Latency(us) 00:24:15.605 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:15.605 =================================================================================================================== 00:24:15.605 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:15.605 23:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3878144 00:24:15.863 23:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:24:15.863 23:50:50 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:15.863 23:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:15.863 23:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:15.863 23:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:15.863 23:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:15.863 23:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:15.863 23:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3878552 00:24:15.863 23:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:15.863 23:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3878552 /var/tmp/bperf.sock 00:24:15.863 23:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3878552 ']' 00:24:15.863 23:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:15.863 23:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:15.863 23:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:15.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:15.863 23:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:15.863 23:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:15.863 [2024-07-15 23:50:50.963405] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:24:15.863 [2024-07-15 23:50:50.963478] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3878552 ] 00:24:15.863 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:15.863 Zero copy mechanism will not be used. 
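nvmf_digest_clean drives four such passes (host/digest.sh@128 through @131 in this trace): randread and randwrite, each at 4 KiB/QD128 and 128 KiB/QD16, all with DSA offload disabled. An illustrative loop (run_bperf is the harness helper shown above; the loop itself is a paraphrase, not the script's literal structure):

  for args in "randread 4096 128" "randread 131072 16" \
              "randwrite 4096 128" "randwrite 131072 16"; do
    run_bperf $args false   # trailing arg is scan_dsa; false = software crc32c
  done

The 131072-byte passes also print the notice above because 131072 exceeds the reported 65536-byte zero-copy threshold, so those buffers take the copy path.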
00:24:16.121 EAL: No free 2048 kB hugepages reported on node 1 00:24:16.121 [2024-07-15 23:50:51.020465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.121 [2024-07-15 23:50:51.125076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:16.121 23:50:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:16.121 23:50:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:16.121 23:50:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:16.121 23:50:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:16.121 23:50:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:16.380 23:50:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:16.380 23:50:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:16.946 nvme0n1 00:24:16.946 23:50:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:16.946 23:50:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:16.946 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:16.946 Zero copy mechanism will not be used. 00:24:16.946 Running I/O for 2 seconds... 
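When a pass completes, the harness verifies that the digests were actually computed by the expected accel module (software here, since scan_dsa=false). A sketch of that check, using the jq filter verbatim from the trace:

  read -r acc_module acc_executed < <(
    ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
  (( acc_executed > 0 ))           # at least one crc32c was executed...
  [[ $acc_module == software ]]    # ...and it ran in the expected module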
00:24:19.476 00:24:19.477 Latency(us) 00:24:19.477 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:19.477 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:19.477 nvme0n1 : 2.00 5665.73 708.22 0.00 0.00 2819.78 743.35 4247.70 00:24:19.477 =================================================================================================================== 00:24:19.477 Total : 5665.73 708.22 0.00 0.00 2819.78 743.35 4247.70 00:24:19.477 0 00:24:19.477 23:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:19.477 23:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:19.477 23:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:19.477 23:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:19.477 23:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:19.477 | select(.opcode=="crc32c") 00:24:19.477 | "\(.module_name) \(.executed)"' 00:24:19.477 23:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:19.477 23:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:19.477 23:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:19.477 23:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:19.477 23:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3878552 00:24:19.477 23:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3878552 ']' 00:24:19.477 23:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3878552 00:24:19.477 23:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:19.477 23:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:19.477 23:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3878552 00:24:19.477 23:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:19.477 23:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:19.477 23:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3878552' 00:24:19.477 killing process with pid 3878552 00:24:19.477 23:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3878552 00:24:19.477 Received shutdown signal, test time was about 2.000000 seconds 00:24:19.477 00:24:19.477 Latency(us) 00:24:19.477 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:19.477 =================================================================================================================== 00:24:19.477 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:19.477 23:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3878552 00:24:19.477 23:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:24:19.477 23:50:54 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:19.477 23:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:19.477 23:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:19.477 23:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:19.477 23:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:19.477 23:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:19.477 23:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3878956 00:24:19.477 23:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:19.477 23:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3878956 /var/tmp/bperf.sock 00:24:19.477 23:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3878956 ']' 00:24:19.477 23:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:19.477 23:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:19.477 23:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:19.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:19.477 23:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:19.477 23:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:19.736 [2024-07-15 23:50:54.638879] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
00:24:19.736 [2024-07-15 23:50:54.638986] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3878956 ] 00:24:19.736 EAL: No free 2048 kB hugepages reported on node 1 00:24:19.736 [2024-07-15 23:50:54.699685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.736 [2024-07-15 23:50:54.805742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:19.736 23:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:19.736 23:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:19.736 23:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:19.736 23:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:19.736 23:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:20.302 23:50:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:20.302 23:50:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:20.588 nvme0n1 00:24:20.588 23:50:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:20.588 23:50:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:20.588 Running I/O for 2 seconds... 
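Once each run's results are in, the bdevperf process is reaped with the harness killprocess helper; the xtrace lines around each "killing process with pid" message decompose roughly as follows (a reconstruction of the non-sudo path only; the real helper in autotest_common.sh also covers sudo-wrapped and non-Linux processes):

  killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid"                            # fails if the process is already gone
    local pname
    pname=$(ps --no-headers -o comm= "$pid")  # reactor_1 for these bdevperf runs
    if [[ $pname != sudo ]]; then
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"   # reap it; bdevperf prints its shutdown latency table here
    fi
  }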
00:24:23.113 00:24:23.113 Latency(us) 00:24:23.113 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:23.113 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:23.113 nvme0n1 : 2.01 22182.53 86.65 0.00 0.00 5760.43 2560.76 9466.31 00:24:23.113 =================================================================================================================== 00:24:23.113 Total : 22182.53 86.65 0.00 0.00 5760.43 2560.76 9466.31 00:24:23.113 0 00:24:23.113 23:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:23.113 23:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:23.113 23:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:23.113 23:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:23.113 23:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:23.113 | select(.opcode=="crc32c") 00:24:23.113 | "\(.module_name) \(.executed)"' 00:24:23.113 23:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:23.113 23:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:23.113 23:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:23.113 23:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:23.114 23:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3878956 00:24:23.114 23:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3878956 ']' 00:24:23.114 23:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3878956 00:24:23.114 23:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:23.114 23:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:23.114 23:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3878956 00:24:23.114 23:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:23.114 23:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:23.114 23:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3878956' 00:24:23.114 killing process with pid 3878956 00:24:23.114 23:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3878956 00:24:23.114 Received shutdown signal, test time was about 2.000000 seconds 00:24:23.114 00:24:23.114 Latency(us) 00:24:23.114 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:23.114 =================================================================================================================== 00:24:23.114 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:23.114 23:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3878956 00:24:23.371 23:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:24:23.371 23:50:58 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:23.371 23:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:23.371 23:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:23.371 23:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:23.371 23:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:23.371 23:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:23.371 23:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3879461 00:24:23.371 23:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3879461 /var/tmp/bperf.sock 00:24:23.371 23:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:23.371 23:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3879461 ']' 00:24:23.371 23:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:23.371 23:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:23.371 23:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:23.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:23.371 23:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:23.371 23:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:23.371 [2024-07-15 23:50:58.298840] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:24:23.371 [2024-07-15 23:50:58.298914] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3879461 ] 00:24:23.371 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:23.371 Zero copy mechanism will not be used. 
00:24:23.371 EAL: No free 2048 kB hugepages reported on node 1 00:24:23.371 [2024-07-15 23:50:58.355563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.371 [2024-07-15 23:50:58.459598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:23.629 23:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:23.629 23:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:23.629 23:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:23.629 23:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:23.629 23:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:23.887 23:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:23.887 23:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:24.452 nvme0n1 00:24:24.453 23:50:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:24.453 23:50:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:24.453 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:24.453 Zero copy mechanism will not be used. 00:24:24.453 Running I/O for 2 seconds... 
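The MiB/s column in each result table is just IOPS × block size; a quick consistency check against the two randread passes reported earlier (numbers copied from those tables):

  awk 'BEGIN {
    printf "%.2f MiB/s\n", 19254.65 * 4096   / 1048576   # 4 KiB randread   -> 75.21
    printf "%.2f MiB/s\n", 5665.73  * 131072 / 1048576   # 128 KiB randread -> 708.22
  }'

Both match the Total rows above, so the tables' units are internally consistent.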
00:24:26.353 00:24:26.353 Latency(us) 00:24:26.353 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.353 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:26.353 nvme0n1 : 2.00 5843.22 730.40 0.00 0.00 2731.00 1784.04 6505.05 00:24:26.353 =================================================================================================================== 00:24:26.353 Total : 5843.22 730.40 0.00 0.00 2731.00 1784.04 6505.05 00:24:26.353 0 00:24:26.353 23:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:26.353 23:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:26.353 23:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:26.353 23:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:26.353 23:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:26.353 | select(.opcode=="crc32c") 00:24:26.353 | "\(.module_name) \(.executed)"' 00:24:26.611 23:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:26.611 23:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:26.611 23:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:26.611 23:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:26.611 23:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3879461 00:24:26.611 23:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3879461 ']' 00:24:26.611 23:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3879461 00:24:26.611 23:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:26.611 23:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:26.611 23:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3879461 00:24:26.611 23:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:26.611 23:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:26.612 23:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3879461' 00:24:26.612 killing process with pid 3879461 00:24:26.612 23:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3879461 00:24:26.612 Received shutdown signal, test time was about 2.000000 seconds 00:24:26.612 00:24:26.612 Latency(us) 00:24:26.612 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.612 =================================================================================================================== 00:24:26.612 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:26.612 23:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3879461 00:24:26.870 23:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3878116 00:24:26.870 23:51:01 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3878116 ']' 00:24:26.870 23:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3878116 00:24:26.870 23:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:26.870 23:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:26.870 23:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3878116 00:24:26.870 23:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:26.870 23:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:26.870 23:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3878116' 00:24:26.870 killing process with pid 3878116 00:24:26.870 23:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3878116 00:24:26.870 23:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3878116 00:24:27.128 00:24:27.128 real 0m15.336s 00:24:27.128 user 0m30.477s 00:24:27.128 sys 0m4.018s 00:24:27.128 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:27.128 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:27.128 ************************************ 00:24:27.128 END TEST nvmf_digest_clean 00:24:27.128 ************************************ 00:24:27.387 23:51:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:24:27.387 23:51:02 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:24:27.387 23:51:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:27.387 23:51:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:27.387 23:51:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:27.387 ************************************ 00:24:27.387 START TEST nvmf_digest_error 00:24:27.387 ************************************ 00:24:27.387 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:24:27.387 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:24:27.387 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:27.387 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:27.387 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:27.387 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=3880036 00:24:27.387 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:27.387 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 3880036 00:24:27.387 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3880036 ']' 00:24:27.387 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:24:27.387 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:27.387 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:27.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:27.387 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:27.387 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:27.387 [2024-07-15 23:51:02.351778] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:24:27.387 [2024-07-15 23:51:02.351863] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:27.387 EAL: No free 2048 kB hugepages reported on node 1 00:24:27.387 [2024-07-15 23:51:02.418518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.645 [2024-07-15 23:51:02.521433] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:27.645 [2024-07-15 23:51:02.521497] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:27.645 [2024-07-15 23:51:02.521510] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:27.645 [2024-07-15 23:51:02.521521] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:27.646 [2024-07-15 23:51:02.521529] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
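nvmf_digest_error repeats the same target setup, then deliberately breaks the digest path: crc32c is routed through the error accel module and corruption is injected, so completions should surface as transient transport errors rather than clean reads. The two target-side RPCs, as issued in the trace below (rpc_cmd wraps rpc.py against /var/tmp/spdk.sock):

  # route every crc32c operation through the error-injection module
  rpc_cmd accel_assign_opc -o crc32c -m error
  # inject 256 corrupt-digest events; each one shows up initiator-side as
  # "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" in the completion records below
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256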
00:24:27.646 [2024-07-15 23:51:02.521574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.646 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:27.646 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:27.646 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:27.646 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:27.646 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:27.646 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:27.646 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:24:27.646 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.646 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:27.646 [2024-07-15 23:51:02.582082] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:24:27.646 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.646 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:24:27.646 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:24:27.646 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.646 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:27.646 null0 00:24:27.646 [2024-07-15 23:51:02.686026] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:27.646 [2024-07-15 23:51:02.710205] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:27.646 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.646 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:24:27.646 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:27.646 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:27.646 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:27.646 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:27.646 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3880121 00:24:27.646 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:24:27.646 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3880121 /var/tmp/bperf.sock 00:24:27.646 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3880121 ']' 00:24:27.646 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:27.646 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:24:27.646 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:27.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:27.646 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:27.646 23:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:27.646 [2024-07-15 23:51:02.758311] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:24:27.646 [2024-07-15 23:51:02.758399] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3880121 ] 00:24:27.904 EAL: No free 2048 kB hugepages reported on node 1 00:24:27.904 [2024-07-15 23:51:02.817413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.904 [2024-07-15 23:51:02.922279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:27.904 23:51:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:27.904 23:51:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:27.904 23:51:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:27.904 23:51:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:28.162 23:51:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:28.162 23:51:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.162 23:51:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:28.419 23:51:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.419 23:51:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:28.419 23:51:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:28.677 nvme0n1 00:24:28.677 23:51:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:28.677 23:51:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.677 23:51:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:28.677 23:51:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.677 23:51:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:28.677 23:51:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:24:28.936 Running I/O for 2 seconds...
00:24:28.936 [2024-07-15 23:51:03.866831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb2d50)
00:24:28.936 [2024-07-15 23:51:03.866882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.936 [2024-07-15 23:51:03.866916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:28.936 [2024-07-15 23:51:03.880597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb2d50)
00:24:28.936 [2024-07-15 23:51:03.880627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.936 [2024-07-15 23:51:03.880659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:28.936 [2024-07-15 23:51:03.892607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb2d50)
00:24:28.936 [2024-07-15 23:51:03.892653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.936 [2024-07-15 23:51:03.892670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-record group (nvme_tcp.c:1459 data digest error on tqpair=(0x1bb2d50), nvme_qpair.c: 243 READ command notice, nvme_qpair.c: 474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats with varying cid and lba values from 23:51:03.905 through 23:51:05.742 ...]
00:24:30.754 [2024-07-15 23:51:05.754328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb2d50)
00:24:30.754 [2024-07-15 23:51:05.754357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:6816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:30.754 [2024-07-15 23:51:05.754387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:30.754 [2024-07-15 23:51:05.768009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb2d50)
00:24:30.754 [2024-07-15 23:51:05.768038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:30.754 [2024-07-15 23:51:05.768054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:30.754 [2024-07-15 23:51:05.779862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb2d50)
00:24:30.754 [2024-07-15 23:51:05.779891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:16796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.754 [2024-07-15 23:51:05.779921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.754 [2024-07-15 23:51:05.795699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb2d50) 00:24:30.754 [2024-07-15 23:51:05.795738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.754 [2024-07-15 23:51:05.795769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.754 [2024-07-15 23:51:05.810142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb2d50) 00:24:30.754 [2024-07-15 23:51:05.810197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:14935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.754 [2024-07-15 23:51:05.810227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.755 [2024-07-15 23:51:05.822458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb2d50) 00:24:30.755 [2024-07-15 23:51:05.822488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.755 [2024-07-15 23:51:05.822519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.755 [2024-07-15 23:51:05.834849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb2d50) 00:24:30.755 [2024-07-15 23:51:05.834880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.755 [2024-07-15 23:51:05.834898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.755 [2024-07-15 23:51:05.846544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bb2d50) 00:24:30.755 [2024-07-15 23:51:05.846573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.755 [2024-07-15 23:51:05.846604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.755 00:24:30.755 Latency(us) 00:24:30.755 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:30.755 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:30.755 nvme0n1 : 2.00 19184.23 74.94 0.00 0.00 6663.09 3762.25 18835.53 00:24:30.755 =================================================================================================================== 00:24:30.755 Total : 19184.23 74.94 0.00 0.00 6663.09 3762.25 18835.53 00:24:30.755 0 00:24:30.755 23:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount 
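(As a quick cross-check of the table above: throughput is just IOPS times the 4096-byte I/O size, 19184.23 x 4096 / 2^20 = 74.94 MiB/s, matching the MiB/s column; Average, min and max are per-command latencies in microseconds, per the Latency(us) header.)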
00:24:30.755 23:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:30.755 23:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:30.755 23:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:30.755 23:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:30.755 | .driver_specific
00:24:30.755 | .nvme_error
00:24:30.755 | .status_code
00:24:30.755 | .command_transient_transport_error'
00:24:31.323 23:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 150 > 0 ))
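The 150 asserted above is the transient-transport-error count accumulated by the previous 2-second pass. For reference, it can be read back by hand exactly the way get_transient_errcount does it; a minimal sketch, assuming an SPDK checkout as the working directory and a bdevperf instance still listening on /var/tmp/bperf.sock with the bdev named nvme0n1 (the single-path jq filter is equivalent to the multi-line one in the trace):

  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

The nvme_error block is only populated because the harness passes --nvme-error-stat to bdev_nvme_set_options before attaching the controller.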
00:24:31.323 23:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3880121
00:24:31.323 23:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3880121 ']'
00:24:31.323 23:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3880121
00:24:31.323 23:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:24:31.323 23:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:31.323 23:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3880121
00:24:31.323 23:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:24:31.323 23:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:24:31.323 23:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3880121'
killing process with pid 3880121
00:24:31.323 23:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3880121
Received shutdown signal, test time was about 2.000000 seconds
00:24:31.323
00:24:31.323 Latency(us)
00:24:31.323 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:31.323 ===================================================================================================================
00:24:31.323 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:31.323 23:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3880121
00:24:31.323 23:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:24:31.323 23:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:24:31.323 23:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:24:31.323 23:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:24:31.323 23:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:24:31.323 23:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3880587
00:24:31.323 23:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:24:31.323 23:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3880587 /var/tmp/bperf.sock
00:24:31.323 23:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3880587 ']'
00:24:31.323 23:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:24:31.323 23:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:31.323 23:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
23:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
23:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:31.581 [2024-07-15 23:51:06.455846] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization...
00:24:31.582 [2024-07-15 23:51:06.455922] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3880587 ]
00:24:31.582 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:31.582 Zero copy mechanism will not be used.
00:24:31.582 EAL: No free 2048 kB hugepages reported on node 1
00:24:31.582 [2024-07-15 23:51:06.515040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:31.582 [2024-07-15 23:51:06.622929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:31.840 23:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:31.840 23:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:24:31.840 23:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:31.840 23:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:32.098 23:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:24:32.098 23:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:32.098 23:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:32.098 23:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:32.098 23:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:32.098 23:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:32.357 nvme0n1
00:24:32.357 23:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:24:32.357 23:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:32.357 23:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
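The RPCs above are the entire error-injection setup for the 131072-byte, qd=16 pass: NVMe error counters are enabled with unlimited retries, the controller is attached with TCP data digest negotiated (--ddgst), and the accel crc32c operation is primed to corrupt 32 results so received data digests miscompare on the host side. Replayed by hand it is roughly the following (a sketch against the same sockets this run uses; in the harness the accel_error_inject_error calls go through rpc_cmd, whose target socket is whatever the surrounding suite configured):

  # surface digest failures as counters, retrying I/O indefinitely
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # clear any previous injection, then attach with data digest enabled
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # corrupt the next 32 crc32c computations so data digests miscompare
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32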
00:24:32.357 23:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:32.357 23:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:24:32.357 23:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:24:32.357 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:32.357 Zero copy mechanism will not be used.
00:24:32.357 Running I/O for 2 seconds...
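bperf_py resolves to the bdevperf RPC helper, so the 2-second workload is kicked off over the same UNIX socket (bdevperf was started with -z, i.e. it sits idle until told to run). In isolation:

  # tell the waiting bdevperf instance to execute its configured workload
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

In the records that follow, the reads are len:32 rather than len:1: at the 4096-byte block size seen earlier in the run, 32 blocks is exactly the -o 131072 transfer size of this pass.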
00:24:32.357 [2024-07-15 23:51:07.466425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0)
00:24:32.357 [2024-07-15 23:51:07.466477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.357 [2024-07-15 23:51:07.466498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... ~95 further record groups of the same shape trimmed (data digest error on tqpair=(0x232b4f0), READ sqid:1 len:32, COMMAND TRANSIENT TRANSPORT ERROR (00/22)), timestamps 23:51:07.473 through 23:51:08.040 ...]
00:24:33.139 [2024-07-15 23:51:08.045951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0)
00:24:33.139 [2024-07-15 23:51:08.045988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.139 [2024-07-15 23:51:08.046005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:33.139 [2024-07-15 23:51:08.051738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0)
[2024-07-15 23:51:08.051767]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.139 [2024-07-15 23:51:08.051784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.139 [2024-07-15 23:51:08.057696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.139 [2024-07-15 23:51:08.057726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.139 [2024-07-15 23:51:08.057743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:33.139 [2024-07-15 23:51:08.063629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.139 [2024-07-15 23:51:08.063659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.139 [2024-07-15 23:51:08.063676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:33.140 [2024-07-15 23:51:08.069534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.140 [2024-07-15 23:51:08.069564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.140 [2024-07-15 23:51:08.069581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:33.140 [2024-07-15 23:51:08.075637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.140 [2024-07-15 23:51:08.075681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.140 [2024-07-15 23:51:08.075699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.140 [2024-07-15 23:51:08.081556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.140 [2024-07-15 23:51:08.081586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.140 [2024-07-15 23:51:08.081609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:33.140 [2024-07-15 23:51:08.087493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.140 [2024-07-15 23:51:08.087523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.140 [2024-07-15 23:51:08.087540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:33.140 [2024-07-15 23:51:08.093313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 
00:24:33.140 [2024-07-15 23:51:08.093353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.140 [2024-07-15 23:51:08.093370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:33.140 [2024-07-15 23:51:08.099016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.140 [2024-07-15 23:51:08.099045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.140 [2024-07-15 23:51:08.099062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.140 [2024-07-15 23:51:08.104804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.140 [2024-07-15 23:51:08.104833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.140 [2024-07-15 23:51:08.104850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:33.140 [2024-07-15 23:51:08.110604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.140 [2024-07-15 23:51:08.110633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.140 [2024-07-15 23:51:08.110649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:33.140 [2024-07-15 23:51:08.116578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.140 [2024-07-15 23:51:08.116608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.140 [2024-07-15 23:51:08.116624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:33.140 [2024-07-15 23:51:08.122634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.140 [2024-07-15 23:51:08.122663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.140 [2024-07-15 23:51:08.122695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.140 [2024-07-15 23:51:08.128705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.140 [2024-07-15 23:51:08.128749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.140 [2024-07-15 23:51:08.128766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:33.140 [2024-07-15 23:51:08.134793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.140 [2024-07-15 23:51:08.134827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.140 [2024-07-15 23:51:08.134860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:33.140 [2024-07-15 23:51:08.140740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.140 [2024-07-15 23:51:08.140784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.140 [2024-07-15 23:51:08.140801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:33.140 [2024-07-15 23:51:08.146683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.140 [2024-07-15 23:51:08.146712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.140 [2024-07-15 23:51:08.146729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.140 [2024-07-15 23:51:08.152658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.140 [2024-07-15 23:51:08.152687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.140 [2024-07-15 23:51:08.152704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:33.140 [2024-07-15 23:51:08.158539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.140 [2024-07-15 23:51:08.158568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.140 [2024-07-15 23:51:08.158584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:33.140 [2024-07-15 23:51:08.164726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.140 [2024-07-15 23:51:08.164755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.140 [2024-07-15 23:51:08.164772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:33.140 [2024-07-15 23:51:08.170539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.140 [2024-07-15 23:51:08.170568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.140 [2024-07-15 23:51:08.170584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.140 [2024-07-15 23:51:08.176567] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.140 [2024-07-15 23:51:08.176610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.140 [2024-07-15 23:51:08.176628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:33.140 [2024-07-15 23:51:08.182572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.140 [2024-07-15 23:51:08.182600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.140 [2024-07-15 23:51:08.182617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:33.140 [2024-07-15 23:51:08.188659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.140 [2024-07-15 23:51:08.188687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.140 [2024-07-15 23:51:08.188719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:33.140 [2024-07-15 23:51:08.194932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.140 [2024-07-15 23:51:08.194983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.140 [2024-07-15 23:51:08.195000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.140 [2024-07-15 23:51:08.200948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.140 [2024-07-15 23:51:08.200983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.140 [2024-07-15 23:51:08.201015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:33.140 [2024-07-15 23:51:08.206891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.140 [2024-07-15 23:51:08.206919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.140 [2024-07-15 23:51:08.206951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:33.140 [2024-07-15 23:51:08.212729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.140 [2024-07-15 23:51:08.212758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.140 [2024-07-15 23:51:08.212774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:24:33.140 [2024-07-15 23:51:08.218719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.140 [2024-07-15 23:51:08.218748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.140 [2024-07-15 23:51:08.218765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.140 [2024-07-15 23:51:08.224508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.140 [2024-07-15 23:51:08.224537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.140 [2024-07-15 23:51:08.224553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:33.140 [2024-07-15 23:51:08.230471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.140 [2024-07-15 23:51:08.230501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.140 [2024-07-15 23:51:08.230518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:33.140 [2024-07-15 23:51:08.236436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.140 [2024-07-15 23:51:08.236465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.140 [2024-07-15 23:51:08.236488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:33.141 [2024-07-15 23:51:08.242532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.141 [2024-07-15 23:51:08.242562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.141 [2024-07-15 23:51:08.242579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.141 [2024-07-15 23:51:08.248591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.141 [2024-07-15 23:51:08.248635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.141 [2024-07-15 23:51:08.248652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:33.141 [2024-07-15 23:51:08.255073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.141 [2024-07-15 23:51:08.255103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.141 [2024-07-15 23:51:08.255120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:33.141 [2024-07-15 23:51:08.260683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.141 [2024-07-15 23:51:08.260713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.141 [2024-07-15 23:51:08.260730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:33.400 [2024-07-15 23:51:08.266953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.400 [2024-07-15 23:51:08.266990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.400 [2024-07-15 23:51:08.267007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.400 [2024-07-15 23:51:08.273082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.400 [2024-07-15 23:51:08.273113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.400 [2024-07-15 23:51:08.273130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:33.400 [2024-07-15 23:51:08.278885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.400 [2024-07-15 23:51:08.278914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.400 [2024-07-15 23:51:08.278946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:33.400 [2024-07-15 23:51:08.282185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.400 [2024-07-15 23:51:08.282214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.400 [2024-07-15 23:51:08.282232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:33.400 [2024-07-15 23:51:08.288901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.400 [2024-07-15 23:51:08.288939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.400 [2024-07-15 23:51:08.288993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.400 [2024-07-15 23:51:08.295765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.400 [2024-07-15 23:51:08.295794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.400 [2024-07-15 23:51:08.295833] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:33.400 [2024-07-15 23:51:08.303417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.400 [2024-07-15 23:51:08.303448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.401 [2024-07-15 23:51:08.303480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:33.401 [2024-07-15 23:51:08.311251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.401 [2024-07-15 23:51:08.311305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.401 [2024-07-15 23:51:08.311323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:33.401 [2024-07-15 23:51:08.319451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.401 [2024-07-15 23:51:08.319496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.401 [2024-07-15 23:51:08.319513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.401 [2024-07-15 23:51:08.327423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.401 [2024-07-15 23:51:08.327454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.401 [2024-07-15 23:51:08.327487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:33.401 [2024-07-15 23:51:08.335091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.401 [2024-07-15 23:51:08.335121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.401 [2024-07-15 23:51:08.335138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:33.401 [2024-07-15 23:51:08.343011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.401 [2024-07-15 23:51:08.343042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.401 [2024-07-15 23:51:08.343059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:33.401 [2024-07-15 23:51:08.351004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.401 [2024-07-15 23:51:08.351035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.401 [2024-07-15 23:51:08.351053] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.401 [2024-07-15 23:51:08.359098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.401 [2024-07-15 23:51:08.359128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.401 [2024-07-15 23:51:08.359162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:33.401 [2024-07-15 23:51:08.367211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.401 [2024-07-15 23:51:08.367259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.401 [2024-07-15 23:51:08.367279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:33.401 [2024-07-15 23:51:08.374940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.401 [2024-07-15 23:51:08.374987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.401 [2024-07-15 23:51:08.375019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:33.401 [2024-07-15 23:51:08.382981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.401 [2024-07-15 23:51:08.383012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.401 [2024-07-15 23:51:08.383030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.401 [2024-07-15 23:51:08.391008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.401 [2024-07-15 23:51:08.391037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.401 [2024-07-15 23:51:08.391067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:33.401 [2024-07-15 23:51:08.398907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.401 [2024-07-15 23:51:08.398962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.401 [2024-07-15 23:51:08.398983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:33.401 [2024-07-15 23:51:08.406973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.401 [2024-07-15 23:51:08.407018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:33.401 [2024-07-15 23:51:08.407036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:33.401 [2024-07-15 23:51:08.414912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.401 [2024-07-15 23:51:08.414966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.401 [2024-07-15 23:51:08.414985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.401 [2024-07-15 23:51:08.421289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.401 [2024-07-15 23:51:08.421319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.401 [2024-07-15 23:51:08.421358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:33.401 [2024-07-15 23:51:08.427872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.401 [2024-07-15 23:51:08.427901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.401 [2024-07-15 23:51:08.427933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:33.401 [2024-07-15 23:51:08.434478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.401 [2024-07-15 23:51:08.434508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.401 [2024-07-15 23:51:08.434525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:33.401 [2024-07-15 23:51:08.441177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.401 [2024-07-15 23:51:08.441220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.401 [2024-07-15 23:51:08.441237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.401 [2024-07-15 23:51:08.447819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.401 [2024-07-15 23:51:08.447850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.401 [2024-07-15 23:51:08.447867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:33.402 [2024-07-15 23:51:08.455376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.402 [2024-07-15 23:51:08.455407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.402 [2024-07-15 23:51:08.455424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:33.402 [2024-07-15 23:51:08.463043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.402 [2024-07-15 23:51:08.463087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.402 [2024-07-15 23:51:08.463105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:33.402 [2024-07-15 23:51:08.470887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.402 [2024-07-15 23:51:08.470918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.402 [2024-07-15 23:51:08.470937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.402 [2024-07-15 23:51:08.478767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.402 [2024-07-15 23:51:08.478799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.402 [2024-07-15 23:51:08.478817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:33.402 [2024-07-15 23:51:08.485599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.402 [2024-07-15 23:51:08.485631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.402 [2024-07-15 23:51:08.485649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:33.402 [2024-07-15 23:51:08.490089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.402 [2024-07-15 23:51:08.490119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.402 [2024-07-15 23:51:08.490137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:33.402 [2024-07-15 23:51:08.497665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.402 [2024-07-15 23:51:08.497694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.402 [2024-07-15 23:51:08.497711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.402 [2024-07-15 23:51:08.505283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.402 [2024-07-15 23:51:08.505327] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.402 [2024-07-15 23:51:08.505344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:33.402 [2024-07-15 23:51:08.511917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.402 [2024-07-15 23:51:08.511944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.402 [2024-07-15 23:51:08.511985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:33.402 [2024-07-15 23:51:08.519267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.402 [2024-07-15 23:51:08.519298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.402 [2024-07-15 23:51:08.519316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:33.661 [2024-07-15 23:51:08.525970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.661 [2024-07-15 23:51:08.526000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.661 [2024-07-15 23:51:08.526017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.661 [2024-07-15 23:51:08.532737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.661 [2024-07-15 23:51:08.532768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.661 [2024-07-15 23:51:08.532786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:33.661 [2024-07-15 23:51:08.539411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.661 [2024-07-15 23:51:08.539457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.661 [2024-07-15 23:51:08.539481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:33.661 [2024-07-15 23:51:08.546316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.661 [2024-07-15 23:51:08.546361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.661 [2024-07-15 23:51:08.546378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:33.661 [2024-07-15 23:51:08.552786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.661 
[2024-07-15 23:51:08.552831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.661 [2024-07-15 23:51:08.552849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.661 [2024-07-15 23:51:08.559349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.661 [2024-07-15 23:51:08.559379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.661 [2024-07-15 23:51:08.559410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:33.661 [2024-07-15 23:51:08.565860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.661 [2024-07-15 23:51:08.565892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.661 [2024-07-15 23:51:08.565909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:33.661 [2024-07-15 23:51:08.572511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.661 [2024-07-15 23:51:08.572555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.661 [2024-07-15 23:51:08.572572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:33.661 [2024-07-15 23:51:08.579120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.661 [2024-07-15 23:51:08.579164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.661 [2024-07-15 23:51:08.579180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.661 [2024-07-15 23:51:08.586439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.661 [2024-07-15 23:51:08.586470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.661 [2024-07-15 23:51:08.586488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:33.661 [2024-07-15 23:51:08.593677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.661 [2024-07-15 23:51:08.593708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.661 [2024-07-15 23:51:08.593741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:33.661 [2024-07-15 23:51:08.602061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x232b4f0) 00:24:33.662 [2024-07-15 23:51:08.602099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.662 [2024-07-15 23:51:08.602117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:33.662 [2024-07-15 23:51:08.609263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.662 [2024-07-15 23:51:08.609316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.662 [2024-07-15 23:51:08.609333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.662 [2024-07-15 23:51:08.617226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.662 [2024-07-15 23:51:08.617257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.662 [2024-07-15 23:51:08.617275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:33.662 [2024-07-15 23:51:08.625352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.662 [2024-07-15 23:51:08.625383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.662 [2024-07-15 23:51:08.625401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:33.662 [2024-07-15 23:51:08.633217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.662 [2024-07-15 23:51:08.633272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.662 [2024-07-15 23:51:08.633289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:33.662 [2024-07-15 23:51:08.641216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.662 [2024-07-15 23:51:08.641268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.662 [2024-07-15 23:51:08.641285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.662 [2024-07-15 23:51:08.649161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.662 [2024-07-15 23:51:08.649192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.662 [2024-07-15 23:51:08.649209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:33.662 [2024-07-15 23:51:08.656874] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.662 [2024-07-15 23:51:08.656904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.662 [2024-07-15 23:51:08.656937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:33.662 [2024-07-15 23:51:08.663839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.662 [2024-07-15 23:51:08.663870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.662 [2024-07-15 23:51:08.663887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:33.662 [2024-07-15 23:51:08.668298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.662 [2024-07-15 23:51:08.668342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.662 [2024-07-15 23:51:08.668358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.662 [2024-07-15 23:51:08.676181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.662 [2024-07-15 23:51:08.676211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.662 [2024-07-15 23:51:08.676228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:33.662 [2024-07-15 23:51:08.683987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.662 [2024-07-15 23:51:08.684030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.662 [2024-07-15 23:51:08.684049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:33.662 [2024-07-15 23:51:08.691619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.662 [2024-07-15 23:51:08.691648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.662 [2024-07-15 23:51:08.691680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:33.662 [2024-07-15 23:51:08.698806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0) 00:24:33.662 [2024-07-15 23:51:08.698863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.662 [2024-07-15 23:51:08.698880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:24:33.662 [2024-07-15 23:51:08.706116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0)
00:24:33.662 [2024-07-15 23:51:08.706147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:33.662 [2024-07-15 23:51:08.706165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... dozens of further repetitions of the same three-line sequence elided: nvme_tcp.c:1459 data digest error on tqpair=(0x232b4f0), nvme_qpair.c:243 READ command print, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, spanning 23:51:08.714 through 23:51:09.466 and differing only in cid, lba, and sqhd ...]
00:24:34.444 [2024-07-15 23:51:09.466356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x232b4f0)
00:24:34.444 [2024-07-15 23:51:09.466385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:34.444 [2024-07-15 23:51:09.466401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:34.444
00:24:34.444 Latency(us)
00:24:34.444 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:34.444 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:24:34.444 nvme0n1 : 2.00 4803.66 600.46 0.00 0.00 3325.79 743.35 9272.13
00:24:34.444 ===================================================================================================================
00:24:34.444 Total : 4803.66 600.46 0.00 0.00 3325.79 743.35 9272.13
00:24:34.444 0
00:24:34.444 23:51:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
23:51:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
23:51:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:34.444 | .driver_specific
00:24:34.444 | .nvme_error
00:24:34.444 | .status_code
00:24:34.444 | .command_transient_transport_error'
23:51:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:34.702 23:51:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 310 > 0 ))
00:24:34.702 23:51:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3880587
00:24:34.702 23:51:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3880587 ']'
00:24:34.702 23:51:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3880587
00:24:34.702 23:51:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:24:34.702 23:51:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:34.702 23:51:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3880587
00:24:34.702 23:51:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:24:34.702 23:51:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:24:34.702 23:51:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3880587'
killing process with pid 3880587
23:51:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3880587
Received shutdown signal, test time was about 2.000000 seconds
00:24:34.702
00:24:34.702 Latency(us)
00:24:34.702 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:34.702 ===================================================================================================================
00:24:34.702 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:34.702 23:51:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3880587
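
For context: the pass check traced just above ((( 310 > 0 ))) comes from a single iostat RPC filtered through jq. Below is a minimal sketch of that step, reconstructed from the trace rather than copied from test/nvmf/host/digest.sh, and assuming the same workspace path and bperf socket as this run:

```bash
#!/usr/bin/env bash
# Hedged reconstruction of the get_transient_errcount step seen in the trace.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

get_transient_errcount() {
    local bdev=$1
    # bdev_get_iostat carries per-status-code NVMe error counters here because
    # bdev_nvme was configured with --nvme-error-stat before the run.
    "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error'
}

# The round passes if at least one injected digest error surfaced as a
# COMMAND TRANSIENT TRANSPORT ERROR completion; this run counted 310.
(( $(get_transient_errcount nvme0n1) > 0 ))
```

Since the count is positive, the randread round passes and the bperf process (pid 3880587) is torn down before the next round starts.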
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:34.961 23:51:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:24:34.961 23:51:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:34.961 23:51:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:34.961 23:51:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:24:34.961 23:51:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3881503 00:24:34.961 23:51:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3881503 /var/tmp/bperf.sock 00:24:34.961 23:51:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3881503 ']' 00:24:34.961 23:51:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:34.961 23:51:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:34.961 23:51:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:34.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:34.961 23:51:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:34.961 23:51:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:34.961 [2024-07-15 23:51:10.061377] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
00:24:34.961 [2024-07-15 23:51:10.061464] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3881503 ] 00:24:35.220 EAL: No free 2048 kB hugepages reported on node 1 00:24:35.220 [2024-07-15 23:51:10.120721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.220 [2024-07-15 23:51:10.229605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:35.220 23:51:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:35.220 23:51:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:35.220 23:51:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:35.220 23:51:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:35.478 23:51:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:35.478 23:51:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.478 23:51:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:35.478 23:51:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.478 23:51:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:35.478 23:51:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:36.043 nvme0n1 00:24:36.043 23:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:36.043 23:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.043 23:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:36.043 23:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.043 23:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:36.043 23:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:36.332 Running I/O for 2 seconds... 
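The setup above attaches the controller with --ddgst (per-PDU data digests on the TCP transport) and then arms accel_error_inject_error -o crc32c -t corrupt -i 256, so the randwrite workload that bdevperf now drives (-w randwrite -o 4096 -q 128 -t 2) produces WRITEs whose receiver-side CRC32C check fails and which complete as COMMAND TRANSIENT TRANSPORT ERROR, exactly as the stream below shows. For orientation, a minimal sketch of the digest being tripped, assuming plain CRC32C (Castagnoli) over the payload with the common 0xFFFFFFFF seed and final XOR; the transport's exact PDU framing follows the NVMe/TCP spec and this is not SPDK code:

    # Sketch only: CRC32C (Castagnoli), the checksum family behind the
    # NVMe/TCP data digest (DDGST) that this test corrupts on purpose.
    # Reflected polynomial 0x82F63B78; seed/final-XOR shown here are the
    # conventional choice, not taken from the SPDK source.
    def crc32c(data: bytes, crc: int = 0xFFFFFFFF) -> int:
        for byte in data:
            crc ^= byte
            for _ in range(8):
                # shift right; XOR in the polynomial when the LSB was set
                crc = (crc >> 1) ^ (0x82F63B78 * (crc & 1))
        return crc ^ 0xFFFFFFFF

    payload = b"\x00" * 4096      # one 4 KiB WRITE payload, matching -o 4096
    good = crc32c(payload)
    bad = good ^ 0x1              # a single flipped bit is enough to fail the check
    assert bad != good
    print(f"DDGST ok=0x{good:08x} corrupted=0x{bad:08x}")

Any mismatch between the digest carried in the PDU and the digest recomputed on receive is what data_crc32_calc_done reports as "Data digest error" in the lines that follow.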
00:24:36.332 [2024-07-15 23:51:11.207799] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190ee5c8 00:24:36.332 [2024-07-15 23:51:11.208698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.332 [2024-07-15 23:51:11.208746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:36.332 [2024-07-15 23:51:11.219070] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190fac10 00:24:36.332 [2024-07-15 23:51:11.219908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:18822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.332 [2024-07-15 23:51:11.219951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:36.332 [2024-07-15 23:51:11.231410] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190eaef0 00:24:36.332 [2024-07-15 23:51:11.232411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:21949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.332 [2024-07-15 23:51:11.232455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:36.333 [2024-07-15 23:51:11.243713] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e1b48 00:24:36.333 [2024-07-15 23:51:11.244855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.333 [2024-07-15 23:51:11.244898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:36.333 [2024-07-15 23:51:11.256148] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e84c0 00:24:36.333 [2024-07-15 23:51:11.257484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:10333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.333 [2024-07-15 23:51:11.257526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:36.333 [2024-07-15 23:51:11.268403] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190fac10 00:24:36.333 [2024-07-15 23:51:11.269855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.333 [2024-07-15 23:51:11.269898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:36.333 [2024-07-15 23:51:11.280657] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190f1430 00:24:36.333 [2024-07-15 23:51:11.282264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.333 [2024-07-15 23:51:11.282291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 
sqhd:0065 p:0 m:0 dnr:0 00:24:36.333 [2024-07-15 23:51:11.291889] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e3498 00:24:36.333 [2024-07-15 23:51:11.293240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.333 [2024-07-15 23:51:11.293288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:36.333 [2024-07-15 23:51:11.301500] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190f8e88 00:24:36.333 [2024-07-15 23:51:11.302217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:11545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.333 [2024-07-15 23:51:11.302260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:36.333 [2024-07-15 23:51:11.313593] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190fbcf0 00:24:36.333 [2024-07-15 23:51:11.314456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.333 [2024-07-15 23:51:11.314497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:36.333 [2024-07-15 23:51:11.324777] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190eaef0 00:24:36.333 [2024-07-15 23:51:11.325637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.333 [2024-07-15 23:51:11.325678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:36.333 [2024-07-15 23:51:11.337836] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190eb760 00:24:36.333 [2024-07-15 23:51:11.338900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.333 [2024-07-15 23:51:11.338942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:36.333 [2024-07-15 23:51:11.350077] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e9e10 00:24:36.333 [2024-07-15 23:51:11.351260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.333 [2024-07-15 23:51:11.351301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:36.333 [2024-07-15 23:51:11.361052] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190f46d0 00:24:36.333 [2024-07-15 23:51:11.362206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.333 [2024-07-15 23:51:11.362249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:49 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:36.333 [2024-07-15 23:51:11.373579] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190ee190 00:24:36.333 [2024-07-15 23:51:11.374853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.333 [2024-07-15 23:51:11.374895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:36.333 [2024-07-15 23:51:11.385868] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190fe720 00:24:36.333 [2024-07-15 23:51:11.387375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.333 [2024-07-15 23:51:11.387418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:36.333 [2024-07-15 23:51:11.396830] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190dfdc0 00:24:36.333 [2024-07-15 23:51:11.397904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.333 [2024-07-15 23:51:11.397934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:36.333 [2024-07-15 23:51:11.408675] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e5220 00:24:36.333 [2024-07-15 23:51:11.409703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.333 [2024-07-15 23:51:11.409734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:36.333 [2024-07-15 23:51:11.419837] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e49b0 00:24:36.333 [2024-07-15 23:51:11.421508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.333 [2024-07-15 23:51:11.421537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:36.333 [2024-07-15 23:51:11.430021] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e7c50 00:24:36.333 [2024-07-15 23:51:11.430833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.333 [2024-07-15 23:51:11.430861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:36.598 [2024-07-15 23:51:11.442217] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190f8618 00:24:36.598 [2024-07-15 23:51:11.443109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.598 [2024-07-15 23:51:11.443138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:36.598 [2024-07-15 23:51:11.455332] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190fe2e8 00:24:36.598 [2024-07-15 23:51:11.456442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.598 [2024-07-15 23:51:11.456486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:36.598 [2024-07-15 23:51:11.467719] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190dfdc0 00:24:36.598 [2024-07-15 23:51:11.468953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.598 [2024-07-15 23:51:11.468987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:36.599 [2024-07-15 23:51:11.480056] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190f20d8 00:24:36.599 [2024-07-15 23:51:11.481420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.599 [2024-07-15 23:51:11.481461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:36.599 [2024-07-15 23:51:11.491286] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190eee38 00:24:36.599 [2024-07-15 23:51:11.492587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:5006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.599 [2024-07-15 23:51:11.492629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:36.599 [2024-07-15 23:51:11.502226] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e5220 00:24:36.599 [2024-07-15 23:51:11.503145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.599 [2024-07-15 23:51:11.503174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:36.599 [2024-07-15 23:51:11.514070] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e6738 00:24:36.599 [2024-07-15 23:51:11.514873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.599 [2024-07-15 23:51:11.514917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:36.599 [2024-07-15 23:51:11.525989] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e84c0 00:24:36.599 [2024-07-15 23:51:11.527071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.599 [2024-07-15 23:51:11.527113] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:36.599 [2024-07-15 23:51:11.538152] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e1b48 00:24:36.599 [2024-07-15 23:51:11.539290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.599 [2024-07-15 23:51:11.539331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:36.599 [2024-07-15 23:51:11.550035] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e8088 00:24:36.599 [2024-07-15 23:51:11.551248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.599 [2024-07-15 23:51:11.551276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:36.599 [2024-07-15 23:51:11.562088] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e8d30 00:24:36.599 [2024-07-15 23:51:11.563387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:8266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.599 [2024-07-15 23:51:11.563429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:36.599 [2024-07-15 23:51:11.571684] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190fc560 00:24:36.599 [2024-07-15 23:51:11.572424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.599 [2024-07-15 23:51:11.572452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:36.599 [2024-07-15 23:51:11.583423] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190fc560 00:24:36.599 [2024-07-15 23:51:11.584147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.599 [2024-07-15 23:51:11.584176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:36.599 [2024-07-15 23:51:11.595112] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190fc560 00:24:36.599 [2024-07-15 23:51:11.595826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:10482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.599 [2024-07-15 23:51:11.595859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:36.599 [2024-07-15 23:51:11.607004] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190fc560 00:24:36.599 [2024-07-15 23:51:11.607723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.599 [2024-07-15 23:51:11.607751] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:36.599 [2024-07-15 23:51:11.618876] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190fc560 00:24:36.599 [2024-07-15 23:51:11.619620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.599 [2024-07-15 23:51:11.619648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:36.599 [2024-07-15 23:51:11.632039] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190fc560 00:24:36.599 [2024-07-15 23:51:11.633308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.599 [2024-07-15 23:51:11.633349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:36.599 [2024-07-15 23:51:11.644302] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e3d08 00:24:36.599 [2024-07-15 23:51:11.645705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.599 [2024-07-15 23:51:11.645748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:36.599 [2024-07-15 23:51:11.655224] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190f6cc8 00:24:36.599 [2024-07-15 23:51:11.656304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.599 [2024-07-15 23:51:11.656331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:36.599 [2024-07-15 23:51:11.667066] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190f0350 00:24:36.599 [2024-07-15 23:51:11.668086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:20178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.599 [2024-07-15 23:51:11.668116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:36.599 [2024-07-15 23:51:11.678268] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190fda78 00:24:36.599 [2024-07-15 23:51:11.680033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.599 [2024-07-15 23:51:11.680062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:36.599 [2024-07-15 23:51:11.688397] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e8088 00:24:36.599 [2024-07-15 23:51:11.689114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.599 [2024-07-15 
23:51:11.689157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:36.599 [2024-07-15 23:51:11.701383] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190eee38 00:24:36.599 [2024-07-15 23:51:11.702307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:3999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.599 [2024-07-15 23:51:11.702350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:36.599 [2024-07-15 23:51:11.713635] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190f2d80 00:24:36.599 [2024-07-15 23:51:11.714730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.599 [2024-07-15 23:51:11.714775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:36.859 [2024-07-15 23:51:11.725640] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e1b48 00:24:36.859 [2024-07-15 23:51:11.726874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.859 [2024-07-15 23:51:11.726903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:36.859 [2024-07-15 23:51:11.737656] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e0630 00:24:36.859 [2024-07-15 23:51:11.738676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.859 [2024-07-15 23:51:11.738725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:36.859 [2024-07-15 23:51:11.750013] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e8d30 00:24:36.859 [2024-07-15 23:51:11.751171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:11433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.859 [2024-07-15 23:51:11.751199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:36.859 [2024-07-15 23:51:11.761458] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190f2510 00:24:36.859 [2024-07-15 23:51:11.762265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.859 [2024-07-15 23:51:11.762292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:36.859 [2024-07-15 23:51:11.774780] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190fef90 00:24:36.859 [2024-07-15 23:51:11.776173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:36.859 [2024-07-15 23:51:11.776216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:36.859 [2024-07-15 23:51:11.785933] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190de038 00:24:36.859 [2024-07-15 23:51:11.786951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.859 [2024-07-15 23:51:11.786984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:36.859 [2024-07-15 23:51:11.797721] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190ebfd0 00:24:36.859 [2024-07-15 23:51:11.798710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.859 [2024-07-15 23:51:11.798739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:36.859 [2024-07-15 23:51:11.809748] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e0ea0 00:24:36.859 [2024-07-15 23:51:11.810961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.859 [2024-07-15 23:51:11.810988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:36.859 [2024-07-15 23:51:11.823110] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190f35f0 00:24:36.859 [2024-07-15 23:51:11.824804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.859 [2024-07-15 23:51:11.824846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:36.859 [2024-07-15 23:51:11.831467] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e27f0 00:24:36.859 [2024-07-15 23:51:11.832223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.859 [2024-07-15 23:51:11.832267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:36.859 [2024-07-15 23:51:11.842416] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e5ec8 00:24:36.859 [2024-07-15 23:51:11.843096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.859 [2024-07-15 23:51:11.843138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:36.859 [2024-07-15 23:51:11.854587] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190de470 00:24:36.859 [2024-07-15 23:51:11.855431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:6444 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:24:36.859 [2024-07-15 23:51:11.855473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:36.859 [2024-07-15 23:51:11.866877] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190eb328 00:24:36.859 [2024-07-15 23:51:11.867919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.859 [2024-07-15 23:51:11.867968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:36.859 [2024-07-15 23:51:11.879694] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e0a68 00:24:36.859 [2024-07-15 23:51:11.880632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:3201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.859 [2024-07-15 23:51:11.880660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:36.859 [2024-07-15 23:51:11.890854] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190ef6a8 00:24:36.859 [2024-07-15 23:51:11.892528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:14510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.859 [2024-07-15 23:51:11.892557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:36.859 [2024-07-15 23:51:11.903201] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190f2510 00:24:36.859 [2024-07-15 23:51:11.905103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.859 [2024-07-15 23:51:11.905138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:36.859 [2024-07-15 23:51:11.913239] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190ed0b0 00:24:36.859 [2024-07-15 23:51:11.914087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.859 [2024-07-15 23:51:11.914129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:36.859 [2024-07-15 23:51:11.925548] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190f1ca0 00:24:36.859 [2024-07-15 23:51:11.926554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.859 [2024-07-15 23:51:11.926596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.859 [2024-07-15 23:51:11.937668] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190ebb98 00:24:36.859 [2024-07-15 23:51:11.938775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 
lba:13941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.859 [2024-07-15 23:51:11.938817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:36.859 [2024-07-15 23:51:11.949739] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190eff18 00:24:36.859 [2024-07-15 23:51:11.951057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.859 [2024-07-15 23:51:11.951099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.859 [2024-07-15 23:51:11.961151] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190fb048 00:24:36.859 [2024-07-15 23:51:11.963063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.859 [2024-07-15 23:51:11.963092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.859 [2024-07-15 23:51:11.971161] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190fda78 00:24:36.860 [2024-07-15 23:51:11.971974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.860 [2024-07-15 23:51:11.972016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:37.119 [2024-07-15 23:51:11.984027] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e4140 00:24:37.119 [2024-07-15 23:51:11.985109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.119 [2024-07-15 23:51:11.985138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:37.119 [2024-07-15 23:51:11.997328] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190ec840 00:24:37.119 [2024-07-15 23:51:11.998571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:18083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.119 [2024-07-15 23:51:11.998612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:37.119 [2024-07-15 23:51:12.009501] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e95a0 00:24:37.119 [2024-07-15 23:51:12.010752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.119 [2024-07-15 23:51:12.010794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:37.119 [2024-07-15 23:51:12.020577] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190df988 00:24:37.119 [2024-07-15 23:51:12.021814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:72 nsid:1 lba:1371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.119 [2024-07-15 23:51:12.021856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:37.119 [2024-07-15 23:51:12.031495] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190f5be8 00:24:37.119 [2024-07-15 23:51:12.032424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:13617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.119 [2024-07-15 23:51:12.032465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:37.119 [2024-07-15 23:51:12.043291] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190f0ff8 00:24:37.119 [2024-07-15 23:51:12.044130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.119 [2024-07-15 23:51:12.044159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:37.119 [2024-07-15 23:51:12.056855] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190fdeb0 00:24:37.119 [2024-07-15 23:51:12.058478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.119 [2024-07-15 23:51:12.058520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:37.119 [2024-07-15 23:51:12.069048] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190f6cc8 00:24:37.119 [2024-07-15 23:51:12.070734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.119 [2024-07-15 23:51:12.070777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:37.119 [2024-07-15 23:51:12.081157] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190fb048 00:24:37.119 [2024-07-15 23:51:12.083007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.119 [2024-07-15 23:51:12.083048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.119 [2024-07-15 23:51:12.089436] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e12d8 00:24:37.119 [2024-07-15 23:51:12.090253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:17199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.119 [2024-07-15 23:51:12.090294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:37.119 [2024-07-15 23:51:12.100309] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190fd640 00:24:37.119 [2024-07-15 23:51:12.101111] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:20107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.119 [2024-07-15 23:51:12.101153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:37.119 [2024-07-15 23:51:12.112498] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e9e10 00:24:37.119 [2024-07-15 23:51:12.113461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.119 [2024-07-15 23:51:12.113501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:37.119 [2024-07-15 23:51:12.125655] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e27f0 00:24:37.119 [2024-07-15 23:51:12.126857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.119 [2024-07-15 23:51:12.126899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:37.119 [2024-07-15 23:51:12.137781] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e6b70 00:24:37.119 [2024-07-15 23:51:12.139092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.119 [2024-07-15 23:51:12.139134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:37.119 [2024-07-15 23:51:12.151081] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190ebb98 00:24:37.119 [2024-07-15 23:51:12.152972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.119 [2024-07-15 23:51:12.152999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.119 [2024-07-15 23:51:12.159444] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190f1430 00:24:37.119 [2024-07-15 23:51:12.160310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.119 [2024-07-15 23:51:12.160336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:37.119 [2024-07-15 23:51:12.170388] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190ec840 00:24:37.119 [2024-07-15 23:51:12.171188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:18715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.119 [2024-07-15 23:51:12.171229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:37.119 [2024-07-15 23:51:12.182476] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e88f8 00:24:37.119 [2024-07-15 
23:51:12.183429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.119 [2024-07-15 23:51:12.183471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:37.119 [2024-07-15 23:51:12.195552] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190ed0b0 00:24:37.119 [2024-07-15 23:51:12.196758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.119 [2024-07-15 23:51:12.196799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:37.119 [2024-07-15 23:51:12.207734] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190fb480 00:24:37.119 [2024-07-15 23:51:12.208990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.119 [2024-07-15 23:51:12.209022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:37.119 [2024-07-15 23:51:12.218779] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190fb480 00:24:37.119 [2024-07-15 23:51:12.220060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.119 [2024-07-15 23:51:12.220103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:37.119 [2024-07-15 23:51:12.230950] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190fd208 00:24:37.119 [2024-07-15 23:51:12.232387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.119 [2024-07-15 23:51:12.232429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:37.378 [2024-07-15 23:51:12.243674] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e0ea0 00:24:37.378 [2024-07-15 23:51:12.245470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.378 [2024-07-15 23:51:12.245499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:37.378 [2024-07-15 23:51:12.255011] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e3498 00:24:37.378 [2024-07-15 23:51:12.256209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.378 [2024-07-15 23:51:12.256238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:37.378 [2024-07-15 23:51:12.266846] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190ebfd0 
00:24:37.378 [2024-07-15 23:51:12.267998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.378 [2024-07-15 23:51:12.268027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.378 [2024-07-15 23:51:12.280417] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e9168 00:24:37.378 [2024-07-15 23:51:12.282256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.378 [2024-07-15 23:51:12.282297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.378 [2024-07-15 23:51:12.288789] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190f1868 00:24:37.378 [2024-07-15 23:51:12.289613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.378 [2024-07-15 23:51:12.289654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:37.378 [2024-07-15 23:51:12.302119] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190f92c0 00:24:37.378 [2024-07-15 23:51:12.303496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:8413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.378 [2024-07-15 23:51:12.303537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:37.378 [2024-07-15 23:51:12.313073] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190ec840 00:24:37.378 [2024-07-15 23:51:12.314084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.378 [2024-07-15 23:51:12.314111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:37.378 [2024-07-15 23:51:12.324962] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190fd640 00:24:37.378 [2024-07-15 23:51:12.325919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.378 [2024-07-15 23:51:12.325947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:37.378 [2024-07-15 23:51:12.337105] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e84c0 00:24:37.378 [2024-07-15 23:51:12.338196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.378 [2024-07-15 23:51:12.338224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:37.378 [2024-07-15 23:51:12.348108] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x19fc710) with pdu=0x2000190f7970 00:24:37.378 [2024-07-15 23:51:12.349947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.378 [2024-07-15 23:51:12.349985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:37.378 [2024-07-15 23:51:12.358165] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190f31b8 00:24:37.378 [2024-07-15 23:51:12.358942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:11297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.378 [2024-07-15 23:51:12.358987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:37.378 [2024-07-15 23:51:12.370407] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190f46d0 00:24:37.378 [2024-07-15 23:51:12.371383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.378 [2024-07-15 23:51:12.371424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:37.378 [2024-07-15 23:51:12.382689] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190fd640 00:24:37.378 [2024-07-15 23:51:12.383773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:15514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.378 [2024-07-15 23:51:12.383814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:37.378 [2024-07-15 23:51:12.395699] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190fdeb0 00:24:37.378 [2024-07-15 23:51:12.397101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.378 [2024-07-15 23:51:12.397145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:37.378 [2024-07-15 23:51:12.407876] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190f3a28 00:24:37.378 [2024-07-15 23:51:12.409276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.378 [2024-07-15 23:51:12.409317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:37.378 [2024-07-15 23:51:12.418753] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190ddc00 00:24:37.378 [2024-07-15 23:51:12.420134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:24117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.378 [2024-07-15 23:51:12.420176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:37.378 [2024-07-15 23:51:12.430909] tcp.c:2081:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e1f80 00:24:37.378 [2024-07-15 23:51:12.432457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.378 [2024-07-15 23:51:12.432498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:37.378 [2024-07-15 23:51:12.443063] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e99d8 00:24:37.378 [2024-07-15 23:51:12.444736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.378 [2024-07-15 23:51:12.444777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:37.378 [2024-07-15 23:51:12.455100] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190ff3c8 00:24:37.378 [2024-07-15 23:51:12.456915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.378 [2024-07-15 23:51:12.456964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:37.378 [2024-07-15 23:51:12.464364] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190eaef0 00:24:37.378 [2024-07-15 23:51:12.465571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.379 [2024-07-15 23:51:12.465612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:37.379 [2024-07-15 23:51:12.476541] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190de470 00:24:37.379 [2024-07-15 23:51:12.477893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.379 [2024-07-15 23:51:12.477934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:37.379 [2024-07-15 23:51:12.488631] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190eb760 00:24:37.379 [2024-07-15 23:51:12.490186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.379 [2024-07-15 23:51:12.490227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:37.379 [2024-07-15 23:51:12.501161] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190f1868 00:24:37.637 [2024-07-15 23:51:12.503187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.637 [2024-07-15 23:51:12.503216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:37.637 [2024-07-15 23:51:12.512674] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190f8618 00:24:37.637 [2024-07-15 23:51:12.514053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.637 [2024-07-15 23:51:12.514101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:37.637 [2024-07-15 23:51:12.523439] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190ebb98 00:24:37.637 [2024-07-15 23:51:12.524647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:15236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.637 [2024-07-15 23:51:12.524688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:37.637 [2024-07-15 23:51:12.534301] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190f1430 00:24:37.637 [2024-07-15 23:51:12.535110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.637 [2024-07-15 23:51:12.535153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:37.637 [2024-07-15 23:51:12.546172] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190ef270 00:24:37.637 [2024-07-15 23:51:12.546892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:19849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.637 [2024-07-15 23:51:12.546935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:37.637 [2024-07-15 23:51:12.559717] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190df550 00:24:37.637 [2024-07-15 23:51:12.561290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.637 [2024-07-15 23:51:12.561330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:37.637 [2024-07-15 23:51:12.572027] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e49b0 00:24:37.637 [2024-07-15 23:51:12.573684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.637 [2024-07-15 23:51:12.573725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:37.637 [2024-07-15 23:51:12.584282] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190eea00 00:24:37.637 [2024-07-15 23:51:12.586118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.637 [2024-07-15 23:51:12.586162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:37.637 
[2024-07-15 23:51:12.592620] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e5a90 00:24:37.637 [2024-07-15 23:51:12.593418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.637 [2024-07-15 23:51:12.593459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:37.637 [2024-07-15 23:51:12.605905] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e9e10 00:24:37.637 [2024-07-15 23:51:12.607272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.637 [2024-07-15 23:51:12.607299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:37.637 [2024-07-15 23:51:12.618198] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190f9b30 00:24:37.637 [2024-07-15 23:51:12.619696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.637 [2024-07-15 23:51:12.619737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:37.637 [2024-07-15 23:51:12.629094] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190f6890 00:24:37.637 [2024-07-15 23:51:12.630227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.637 [2024-07-15 23:51:12.630281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:37.638 [2024-07-15 23:51:12.640832] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190ebfd0 00:24:37.638 [2024-07-15 23:51:12.641966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.638 [2024-07-15 23:51:12.641995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:37.638 [2024-07-15 23:51:12.651805] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190f0788 00:24:37.638 [2024-07-15 23:51:12.653644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.638 [2024-07-15 23:51:12.653673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:37.638 [2024-07-15 23:51:12.661976] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190fda78 00:24:37.638 [2024-07-15 23:51:12.662735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.638 [2024-07-15 23:51:12.662775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:000c p:0 
m:0 dnr:0 00:24:37.638 [2024-07-15 23:51:12.675070] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e1710 00:24:37.638 [2024-07-15 23:51:12.676092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.638 [2024-07-15 23:51:12.676120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:37.638 [2024-07-15 23:51:12.687004] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190fac10 00:24:37.638 [2024-07-15 23:51:12.687989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.638 [2024-07-15 23:51:12.688016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:37.638 [2024-07-15 23:51:12.699222] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190fc998 00:24:37.638 [2024-07-15 23:51:12.700385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.638 [2024-07-15 23:51:12.700427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:37.638 [2024-07-15 23:51:12.710450] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190edd58 00:24:37.638 [2024-07-15 23:51:12.711548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.638 [2024-07-15 23:51:12.711589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:37.638 [2024-07-15 23:51:12.722774] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190fcdd0 00:24:37.638 [2024-07-15 23:51:12.724005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.638 [2024-07-15 23:51:12.724047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:37.638 [2024-07-15 23:51:12.733774] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190de470 00:24:37.638 [2024-07-15 23:51:12.734615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.638 [2024-07-15 23:51:12.734657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:37.638 [2024-07-15 23:51:12.745763] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190f7538 00:24:37.638 [2024-07-15 23:51:12.746595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.638 [2024-07-15 23:51:12.746639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 
cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:37.638 [2024-07-15 23:51:12.758245] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e1710 00:24:37.638 [2024-07-15 23:51:12.759113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.638 [2024-07-15 23:51:12.759141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:37.896 [2024-07-15 23:51:12.770896] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e23b8 00:24:37.897 [2024-07-15 23:51:12.771977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.897 [2024-07-15 23:51:12.772007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:37.897 [2024-07-15 23:51:12.784370] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190ec840 00:24:37.897 [2024-07-15 23:51:12.786168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.897 [2024-07-15 23:51:12.786210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:37.897 [2024-07-15 23:51:12.792710] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190fb8b8 00:24:37.897 [2024-07-15 23:51:12.793502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.897 [2024-07-15 23:51:12.793543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:37.897 [2024-07-15 23:51:12.803802] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e1b48 00:24:37.897 [2024-07-15 23:51:12.804586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.897 [2024-07-15 23:51:12.804627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:37.897 [2024-07-15 23:51:12.816074] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190f9b30 00:24:37.897 [2024-07-15 23:51:12.816995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.897 [2024-07-15 23:51:12.817046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:37.897 [2024-07-15 23:51:12.828523] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190f7da8 00:24:37.897 [2024-07-15 23:51:12.829582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:18370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.897 [2024-07-15 23:51:12.829635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:66 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:37.897 [2024-07-15 23:51:12.840773] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e6b70 00:24:37.897 [2024-07-15 23:51:12.842057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.897 [2024-07-15 23:51:12.842100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:37.897 [2024-07-15 23:51:12.852054] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e4de8 00:24:37.897 [2024-07-15 23:51:12.852935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.897 [2024-07-15 23:51:12.852984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:37.897 [2024-07-15 23:51:12.865529] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190f4298 00:24:37.897 [2024-07-15 23:51:12.867019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.897 [2024-07-15 23:51:12.867062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:37.897 [2024-07-15 23:51:12.877877] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190f2510 00:24:37.897 [2024-07-15 23:51:12.879615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.897 [2024-07-15 23:51:12.879657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:37.897 [2024-07-15 23:51:12.890277] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190f8618 00:24:37.897 [2024-07-15 23:51:12.892133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.897 [2024-07-15 23:51:12.892177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:37.897 [2024-07-15 23:51:12.898634] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190f0ff8 00:24:37.897 [2024-07-15 23:51:12.899470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.897 [2024-07-15 23:51:12.899510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:37.897 [2024-07-15 23:51:12.909811] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190f8e88 00:24:37.897 [2024-07-15 23:51:12.910592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:18806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.897 [2024-07-15 23:51:12.910634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:37.897 [2024-07-15 23:51:12.922015] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e1710 00:24:37.897 [2024-07-15 23:51:12.922910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:10501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.897 [2024-07-15 23:51:12.922963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:37.897 [2024-07-15 23:51:12.934250] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190eee38 00:24:37.897 [2024-07-15 23:51:12.935387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.897 [2024-07-15 23:51:12.935429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:37.897 [2024-07-15 23:51:12.945440] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190f6458 00:24:37.897 [2024-07-15 23:51:12.946175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.897 [2024-07-15 23:51:12.946217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:37.897 [2024-07-15 23:51:12.958658] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e3d08 00:24:37.897 [2024-07-15 23:51:12.959981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.897 [2024-07-15 23:51:12.960016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:37.897 [2024-07-15 23:51:12.969723] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190f92c0 00:24:37.897 [2024-07-15 23:51:12.970706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:9097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.897 [2024-07-15 23:51:12.970749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:37.897 [2024-07-15 23:51:12.981654] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e84c0 00:24:37.897 [2024-07-15 23:51:12.982600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.897 [2024-07-15 23:51:12.982630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:37.897 [2024-07-15 23:51:12.995193] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190fb480 00:24:37.897 [2024-07-15 23:51:12.996913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.897 [2024-07-15 23:51:12.996962] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:37.897 [2024-07-15 23:51:13.007554] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190fc560 00:24:37.897 [2024-07-15 23:51:13.009379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:21103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.897 [2024-07-15 23:51:13.009420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:37.897 [2024-07-15 23:51:13.016024] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190f5378 00:24:37.897 [2024-07-15 23:51:13.017042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.897 [2024-07-15 23:51:13.017071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:38.156 [2024-07-15 23:51:13.027616] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e88f8 00:24:38.156 [2024-07-15 23:51:13.028440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.156 [2024-07-15 23:51:13.028482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:38.156 [2024-07-15 23:51:13.040823] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e6fa8 00:24:38.156 [2024-07-15 23:51:13.041816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:15551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.156 [2024-07-15 23:51:13.041858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:38.156 [2024-07-15 23:51:13.053138] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190f35f0 00:24:38.156 [2024-07-15 23:51:13.054211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.156 [2024-07-15 23:51:13.054254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:38.156 [2024-07-15 23:51:13.064115] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e84c0 00:24:38.156 [2024-07-15 23:51:13.065199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.156 [2024-07-15 23:51:13.065242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:38.156 [2024-07-15 23:51:13.076351] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e7c50 00:24:38.156 [2024-07-15 23:51:13.077583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.156 [2024-07-15 
23:51:13.077627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:38.156 [2024-07-15 23:51:13.088639] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190dfdc0 00:24:38.156 [2024-07-15 23:51:13.089976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.156 [2024-07-15 23:51:13.090019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:38.156 [2024-07-15 23:51:13.099578] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e27f0 00:24:38.156 [2024-07-15 23:51:13.100600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:18317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.156 [2024-07-15 23:51:13.100628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:38.156 [2024-07-15 23:51:13.111566] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190f6458 00:24:38.156 [2024-07-15 23:51:13.112483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.156 [2024-07-15 23:51:13.112511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:38.156 [2024-07-15 23:51:13.123748] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190e23b8 00:24:38.156 [2024-07-15 23:51:13.124813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.157 [2024-07-15 23:51:13.124841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:38.157 [2024-07-15 23:51:13.137334] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190f6890 00:24:38.157 [2024-07-15 23:51:13.139213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:10367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.157 [2024-07-15 23:51:13.139256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:38.157 [2024-07-15 23:51:13.145722] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190ed920 00:24:38.157 [2024-07-15 23:51:13.146549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.157 [2024-07-15 23:51:13.146590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:38.157 [2024-07-15 23:51:13.156798] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190f6020 00:24:38.157 [2024-07-15 23:51:13.157574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:38.157 [2024-07-15 23:51:13.157616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:24:38.157 [2024-07-15 23:51:13.169029] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190efae0
00:24:38.157 [2024-07-15 23:51:13.169980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:23092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:38.157 [2024-07-15 23:51:13.170009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:24:38.157 [2024-07-15 23:51:13.181310] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190f9b30
00:24:38.157 [2024-07-15 23:51:13.182382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:38.157 [2024-07-15 23:51:13.182424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:24:38.157 [2024-07-15 23:51:13.193542] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19fc710) with pdu=0x2000190f7970
00:24:38.157 [2024-07-15 23:51:13.194785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:38.157 [2024-07-15 23:51:13.194828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:24:38.157
00:24:38.157 Latency(us)
00:24:38.157 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:38.157 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:24:38.157 nvme0n1 : 2.01 21697.84 84.76 0.00 0.00 5889.45 2536.49 14175.19
00:24:38.157 ===================================================================================================================
00:24:38.157 Total : 21697.84 84.76 0.00 0.00 5889.45 2536.49 14175.19
00:24:38.157 0
00:24:38.157 23:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:38.157 23:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:38.157 23:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:38.157 | .driver_specific
00:24:38.157 | .nvme_error
00:24:38.157 | .status_code
00:24:38.157 | .command_transient_transport_error'
00:24:38.157 23:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:38.415 23:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 170 > 0 ))
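[Editor's note] The trace above is host/digest.sh confirming that the injected digest failures were actually observed: get_transient_errcount asks bdevperf for per-bdev I/O statistics over its RPC socket and extracts the TRANSIENT TRANSPORT ERROR counter with jq, and the (( 170 > 0 )) check shows 170 such completions were counted in this roughly 2-second run. A minimal standalone sketch of the same query, assuming an SPDK app is listening on /var/tmp/bperf.sock and exposes a bdev named nvme0n1 (both taken from the log; $SPDK_DIR is a placeholder for the SPDK checkout):

  # Dump iostat JSON for nvme0n1 and pull out the transient-transport-error
  # count; the driver_specific.nvme_error block is the statistic that
  # bdev_nvme_set_options --nvme-error-stat (set earlier in this test) exposes.
  errcount=$("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 )) && echo "observed $errcount transient transport errors"

With the check satisfied, the script tears down this bdevperf instance before the next variant of the test, as the killprocess sequence below shows.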
00:24:38.415 23:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3881503
00:24:38.415 23:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3881503 ']'
00:24:38.415 23:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3881503
00:24:38.415 23:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:24:38.415 23:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:38.415 23:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3881503
00:24:38.415 23:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:24:38.415 23:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:24:38.415 23:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3881503'
00:24:38.415 killing process with pid 3881503
00:24:38.415 23:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3881503
00:24:38.415 Received shutdown signal, test time was about 2.000000 seconds
00:24:38.415
00:24:38.415 Latency(us)
00:24:38.415 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:38.415 ===================================================================================================================
00:24:38.415 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:38.415 23:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3881503
00:24:38.674 23:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:24:38.674 23:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:24:38.674 23:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:24:38.674 23:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:24:38.674 23:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:24:38.674 23:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3881913
00:24:38.674 23:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3881913 /var/tmp/bperf.sock
00:24:38.674 23:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:24:38.674 23:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3881913 ']'
00:24:38.674 23:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:24:38.674 23:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:38.674 23:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:24:38.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:24:38.674 23:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:38.674 23:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
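[Editor's note] run_bperf_err relaunches bdevperf in passive mode and waits for its RPC socket before driving it. A condensed sketch of that launch sequence under the same flags as the trace above; the waitforlisten helper from autotest_common.sh is approximated here by polling rpc_get_methods, and $SPDK_DIR is a placeholder for the SPDK checkout:

  # Core mask 0x2, RPC socket /var/tmp/bperf.sock, 128 KiB random writes,
  # queue depth 16, 2-second runtime; -z keeps bdevperf idle until
  # perform_tests is sent over the socket.
  "$SPDK_DIR/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  # Block until the RPC socket answers (a simple stand-in for waitforlisten).
  until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods &>/dev/null; do
      sleep 0.5
  done

The bdevperf startup banner and EAL initialization that follow are the new process coming up.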
00:24:38.932 [2024-07-15 23:51:13.800911] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization...
00:24:38.932 [2024-07-15 23:51:13.801026] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3881913 ]
00:24:38.932 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:38.932 Zero copy mechanism will not be used.
00:24:38.932 EAL: No free 2048 kB hugepages reported on node 1
00:24:38.932 [2024-07-15 23:51:13.860724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:38.932 [2024-07-15 23:51:13.970393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:39.195 23:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:39.195 23:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:24:39.195 23:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:39.195 23:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:39.195 23:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:24:39.195 23:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:39.195 23:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:39.195 23:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:39.195 23:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:39.195 23:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:39.758 nvme0n1
00:24:39.758 23:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:24:39.758 23:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:39.758 23:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:39.758 23:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:39.758 23:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:24:39.758 23:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:24:39.758 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:39.758 Zero copy mechanism will not be used.
00:24:39.758 Running I/O for 2 seconds...
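[Editor's note] The block above is the heart of the digest-error test, and the records that follow show its effect. In order: enable per-controller NVMe error statistics and unlimited I/O retries, make sure crc32c error injection starts out disabled, attach the controller over NVMe/TCP with data digest enabled (--ddgst), then corrupt every 32nd crc32c computation in the accel layer before kicking off the workload. Each corrupted digest surfaces as a tcp.c "Data digest error" and completes back as COMMAND TRANSIENT TRANSPORT ERROR (00/22), which bdev_nvme retries. A sketch of the same RPC sequence; note that in the trace bperf_rpc targets bdevperf's socket while rpc_cmd targets the app's default socket (assumed to be /var/tmp/spdk.sock here), and $SPDK_DIR is a placeholder:

  bperf() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }   # bdevperf instance
  tgt()   { "$SPDK_DIR/scripts/rpc.py" "$@"; }                          # app on the default socket
  # Count NVMe error completions per status code and retry failed I/O forever.
  bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Attach cleanly: no crc32c corruption while the connection is set up.
  tgt accel_error_inject_error -o crc32c -t disable
  bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Now corrupt one in every 32 crc32c operations and run the 2-second workload.
  tgt accel_error_inject_error -o crc32c -t corrupt -i 32
  "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests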
00:24:39.758 [2024-07-15 23:51:14.861663] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:39.758 [2024-07-15 23:51:14.862069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.758 [2024-07-15 23:51:14.862105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:39.758 [2024-07-15 23:51:14.867187] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:39.758 [2024-07-15 23:51:14.867490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.758 [2024-07-15 23:51:14.867521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:39.758 [2024-07-15 23:51:14.872203] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:39.758 [2024-07-15 23:51:14.872295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.758 [2024-07-15 23:51:14.872334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:39.758 [2024-07-15 23:51:14.877863] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:39.758 [2024-07-15 23:51:14.878174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.758 [2024-07-15 23:51:14.878204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.017 [2024-07-15 23:51:14.883386] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.017 [2024-07-15 23:51:14.883754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.017 [2024-07-15 23:51:14.883784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.017 [2024-07-15 23:51:14.888888] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.017 [2024-07-15 23:51:14.889226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.017 [2024-07-15 23:51:14.889257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.017 [2024-07-15 23:51:14.893814] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.017 [2024-07-15 23:51:14.893899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.017 [2024-07-15 23:51:14.893926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.017 [2024-07-15 23:51:14.900047] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.017 [2024-07-15 23:51:14.900383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.017 [2024-07-15 23:51:14.900412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.017 [2024-07-15 23:51:14.906202] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.017 [2024-07-15 23:51:14.906522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.017 [2024-07-15 23:51:14.906550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.017 [2024-07-15 23:51:14.912636] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.017 [2024-07-15 23:51:14.912978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.017 [2024-07-15 23:51:14.913006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.017 [2024-07-15 23:51:14.918944] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.017 [2024-07-15 23:51:14.919303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.017 [2024-07-15 23:51:14.919332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.017 [2024-07-15 23:51:14.925463] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.017 [2024-07-15 23:51:14.925770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.017 [2024-07-15 23:51:14.925799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.017 [2024-07-15 23:51:14.930794] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.017 [2024-07-15 23:51:14.931095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.017 [2024-07-15 23:51:14.931124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.017 [2024-07-15 23:51:14.936131] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.017 [2024-07-15 23:51:14.936476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.017 [2024-07-15 23:51:14.936505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.017 [2024-07-15 23:51:14.941522] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.017 [2024-07-15 23:51:14.941877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.018 [2024-07-15 23:51:14.941906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.018 [2024-07-15 23:51:14.947062] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.018 [2024-07-15 23:51:14.947360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.018 [2024-07-15 23:51:14.947388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.018 [2024-07-15 23:51:14.952616] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.018 [2024-07-15 23:51:14.952917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.018 [2024-07-15 23:51:14.952967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.018 [2024-07-15 23:51:14.958314] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.018 [2024-07-15 23:51:14.958649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.018 [2024-07-15 23:51:14.958679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.018 [2024-07-15 23:51:14.964942] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.018 [2024-07-15 23:51:14.965266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.018 [2024-07-15 23:51:14.965309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.018 [2024-07-15 23:51:14.971021] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.018 [2024-07-15 23:51:14.971318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.018 [2024-07-15 23:51:14.971348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.018 [2024-07-15 23:51:14.976345] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.018 [2024-07-15 23:51:14.976653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.018 [2024-07-15 23:51:14.976687] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.018 [2024-07-15 23:51:14.981805] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.018 [2024-07-15 23:51:14.982138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.018 [2024-07-15 23:51:14.982168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.018 [2024-07-15 23:51:14.987155] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.018 [2024-07-15 23:51:14.987482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.018 [2024-07-15 23:51:14.987513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.018 [2024-07-15 23:51:14.992714] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.018 [2024-07-15 23:51:14.993178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.018 [2024-07-15 23:51:14.993206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.018 [2024-07-15 23:51:14.998258] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.018 [2024-07-15 23:51:14.998579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.018 [2024-07-15 23:51:14.998608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.018 [2024-07-15 23:51:15.004013] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.018 [2024-07-15 23:51:15.004323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.018 [2024-07-15 23:51:15.004351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.018 [2024-07-15 23:51:15.010422] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.018 [2024-07-15 23:51:15.010734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.018 [2024-07-15 23:51:15.010762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.018 [2024-07-15 23:51:15.016683] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.018 [2024-07-15 23:51:15.017013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.018 
[2024-07-15 23:51:15.017043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.018 [2024-07-15 23:51:15.023630] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.018 [2024-07-15 23:51:15.023928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.018 [2024-07-15 23:51:15.023964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.018 [2024-07-15 23:51:15.029487] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.018 [2024-07-15 23:51:15.029804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.018 [2024-07-15 23:51:15.029833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.018 [2024-07-15 23:51:15.034817] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.018 [2024-07-15 23:51:15.035134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.018 [2024-07-15 23:51:15.035163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.018 [2024-07-15 23:51:15.040180] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.018 [2024-07-15 23:51:15.040513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.018 [2024-07-15 23:51:15.040543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.018 [2024-07-15 23:51:15.045369] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.018 [2024-07-15 23:51:15.045651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.018 [2024-07-15 23:51:15.045680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.018 [2024-07-15 23:51:15.050609] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.018 [2024-07-15 23:51:15.050923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.018 [2024-07-15 23:51:15.050975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.018 [2024-07-15 23:51:15.055864] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.018 [2024-07-15 23:51:15.056200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.018 [2024-07-15 23:51:15.056229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.018 [2024-07-15 23:51:15.062246] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.018 [2024-07-15 23:51:15.062336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.018 [2024-07-15 23:51:15.062362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.018 [2024-07-15 23:51:15.068718] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.018 [2024-07-15 23:51:15.069022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.018 [2024-07-15 23:51:15.069051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.018 [2024-07-15 23:51:15.074624] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.018 [2024-07-15 23:51:15.075064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.018 [2024-07-15 23:51:15.075108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.018 [2024-07-15 23:51:15.079913] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.018 [2024-07-15 23:51:15.080244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.018 [2024-07-15 23:51:15.080273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.018 [2024-07-15 23:51:15.085326] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.018 [2024-07-15 23:51:15.085658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.018 [2024-07-15 23:51:15.085686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.018 [2024-07-15 23:51:15.090691] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.018 [2024-07-15 23:51:15.091034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.018 [2024-07-15 23:51:15.091063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.018 [2024-07-15 23:51:15.096756] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.018 [2024-07-15 23:51:15.097071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.018 [2024-07-15 23:51:15.097099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.018 [2024-07-15 23:51:15.103369] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.018 [2024-07-15 23:51:15.103728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.018 [2024-07-15 23:51:15.103756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.018 [2024-07-15 23:51:15.109472] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.018 [2024-07-15 23:51:15.109802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.018 [2024-07-15 23:51:15.109830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.018 [2024-07-15 23:51:15.115355] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.018 [2024-07-15 23:51:15.115652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.018 [2024-07-15 23:51:15.115680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.018 [2024-07-15 23:51:15.122553] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.018 [2024-07-15 23:51:15.122878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.018 [2024-07-15 23:51:15.122908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.018 [2024-07-15 23:51:15.128488] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.018 [2024-07-15 23:51:15.128828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.018 [2024-07-15 23:51:15.128865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.018 [2024-07-15 23:51:15.134484] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.018 [2024-07-15 23:51:15.134779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.018 [2024-07-15 23:51:15.134808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.018 [2024-07-15 23:51:15.140127] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.018 [2024-07-15 23:51:15.140428] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.018 [2024-07-15 23:51:15.140457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.277 [2024-07-15 23:51:15.147575] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.277 [2024-07-15 23:51:15.148014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.277 [2024-07-15 23:51:15.148057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.277 [2024-07-15 23:51:15.154631] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.277 [2024-07-15 23:51:15.154982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.277 [2024-07-15 23:51:15.155021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.277 [2024-07-15 23:51:15.161693] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.277 [2024-07-15 23:51:15.162096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.277 [2024-07-15 23:51:15.162126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.277 [2024-07-15 23:51:15.169051] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.277 [2024-07-15 23:51:15.169348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.277 [2024-07-15 23:51:15.169377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.277 [2024-07-15 23:51:15.176706] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.277 [2024-07-15 23:51:15.177029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.277 [2024-07-15 23:51:15.177057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.277 [2024-07-15 23:51:15.182273] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.277 [2024-07-15 23:51:15.182601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.277 [2024-07-15 23:51:15.182630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.277 [2024-07-15 23:51:15.187617] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.277 
[2024-07-15 23:51:15.187919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.277 [2024-07-15 23:51:15.187949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.277 [2024-07-15 23:51:15.193289] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.277 [2024-07-15 23:51:15.193619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.277 [2024-07-15 23:51:15.193647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.277 [2024-07-15 23:51:15.199719] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.277 [2024-07-15 23:51:15.200023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.277 [2024-07-15 23:51:15.200053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.277 [2024-07-15 23:51:15.205201] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.277 [2024-07-15 23:51:15.205534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.277 [2024-07-15 23:51:15.205577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.277 [2024-07-15 23:51:15.210636] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.277 [2024-07-15 23:51:15.211017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.277 [2024-07-15 23:51:15.211045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.277 [2024-07-15 23:51:15.216137] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.277 [2024-07-15 23:51:15.216473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.277 [2024-07-15 23:51:15.216501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.277 [2024-07-15 23:51:15.221566] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.277 [2024-07-15 23:51:15.221899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.277 [2024-07-15 23:51:15.221928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.277 [2024-07-15 23:51:15.226981] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.277 [2024-07-15 23:51:15.227298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.277 [2024-07-15 23:51:15.227327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.277 [2024-07-15 23:51:15.232222] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.277 [2024-07-15 23:51:15.232526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.277 [2024-07-15 23:51:15.232562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.277 [2024-07-15 23:51:15.237822] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.277 [2024-07-15 23:51:15.238257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.277 [2024-07-15 23:51:15.238286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.277 [2024-07-15 23:51:15.244395] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.277 [2024-07-15 23:51:15.244709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.277 [2024-07-15 23:51:15.244738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.277 [2024-07-15 23:51:15.250063] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.277 [2024-07-15 23:51:15.250372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.277 [2024-07-15 23:51:15.250401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.277 [2024-07-15 23:51:15.255408] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.277 [2024-07-15 23:51:15.255728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.277 [2024-07-15 23:51:15.255758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.277 [2024-07-15 23:51:15.261287] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.277 [2024-07-15 23:51:15.261596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.277 [2024-07-15 23:51:15.261626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.278 [2024-07-15 23:51:15.266872] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.278 [2024-07-15 23:51:15.267205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.278 [2024-07-15 23:51:15.267235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.278 [2024-07-15 23:51:15.272799] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.278 [2024-07-15 23:51:15.273145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.278 [2024-07-15 23:51:15.273175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.278 [2024-07-15 23:51:15.279031] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.278 [2024-07-15 23:51:15.279332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.278 [2024-07-15 23:51:15.279362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.278 [2024-07-15 23:51:15.284696] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.278 [2024-07-15 23:51:15.285063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.278 [2024-07-15 23:51:15.285105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.278 [2024-07-15 23:51:15.290242] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.278 [2024-07-15 23:51:15.290554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.278 [2024-07-15 23:51:15.290584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.278 [2024-07-15 23:51:15.295330] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.278 [2024-07-15 23:51:15.295657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.278 [2024-07-15 23:51:15.295687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.278 [2024-07-15 23:51:15.300590] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.278 [2024-07-15 23:51:15.300887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.278 [2024-07-15 23:51:15.300917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:24:40.278 [2024-07-15 23:51:15.306352] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.278 [2024-07-15 23:51:15.306647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.278 [2024-07-15 23:51:15.306674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.278 [2024-07-15 23:51:15.312735] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.278 [2024-07-15 23:51:15.313099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.278 [2024-07-15 23:51:15.313128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.278 [2024-07-15 23:51:15.318721] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.278 [2024-07-15 23:51:15.318806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.278 [2024-07-15 23:51:15.318834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.278 [2024-07-15 23:51:15.325218] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.278 [2024-07-15 23:51:15.325528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.278 [2024-07-15 23:51:15.325556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.278 [2024-07-15 23:51:15.331388] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.278 [2024-07-15 23:51:15.331701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.278 [2024-07-15 23:51:15.331730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.278 [2024-07-15 23:51:15.337970] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.278 [2024-07-15 23:51:15.338280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.278 [2024-07-15 23:51:15.338309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.278 [2024-07-15 23:51:15.344118] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.278 [2024-07-15 23:51:15.344446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.278 [2024-07-15 23:51:15.344475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.278 [2024-07-15 23:51:15.350396] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.278 [2024-07-15 23:51:15.350714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.278 [2024-07-15 23:51:15.350743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.278 [2024-07-15 23:51:15.356499] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.278 [2024-07-15 23:51:15.356596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.278 [2024-07-15 23:51:15.356621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.278 [2024-07-15 23:51:15.362838] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.278 [2024-07-15 23:51:15.363175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.278 [2024-07-15 23:51:15.363205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.278 [2024-07-15 23:51:15.369431] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.278 [2024-07-15 23:51:15.369757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.278 [2024-07-15 23:51:15.369788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.278 [2024-07-15 23:51:15.376323] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.278 [2024-07-15 23:51:15.376649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.278 [2024-07-15 23:51:15.376679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.278 [2024-07-15 23:51:15.384000] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.278 [2024-07-15 23:51:15.384310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.278 [2024-07-15 23:51:15.384338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.278 [2024-07-15 23:51:15.390815] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.278 [2024-07-15 23:51:15.391153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.278 [2024-07-15 23:51:15.391188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.278 [2024-07-15 23:51:15.396684] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.278 [2024-07-15 23:51:15.396990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.278 [2024-07-15 23:51:15.397020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.536 [2024-07-15 23:51:15.401826] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.536 [2024-07-15 23:51:15.402133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.536 [2024-07-15 23:51:15.402164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.536 [2024-07-15 23:51:15.407109] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.536 [2024-07-15 23:51:15.407437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.536 [2024-07-15 23:51:15.407467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.536 [2024-07-15 23:51:15.413488] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.536 [2024-07-15 23:51:15.413805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.536 [2024-07-15 23:51:15.413833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.536 [2024-07-15 23:51:15.419125] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.536 [2024-07-15 23:51:15.419438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.536 [2024-07-15 23:51:15.419466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.536 [2024-07-15 23:51:15.424856] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.536 [2024-07-15 23:51:15.425169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.536 [2024-07-15 23:51:15.425198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.536 [2024-07-15 23:51:15.430154] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.536 [2024-07-15 23:51:15.430472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.536 [2024-07-15 23:51:15.430500] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.536 [2024-07-15 23:51:15.435888] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.536 [2024-07-15 23:51:15.436216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.536 [2024-07-15 23:51:15.436245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.537 [2024-07-15 23:51:15.441094] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.537 [2024-07-15 23:51:15.441434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.537 [2024-07-15 23:51:15.441462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.537 [2024-07-15 23:51:15.446924] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.537 [2024-07-15 23:51:15.447255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.537 [2024-07-15 23:51:15.447283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.537 [2024-07-15 23:51:15.453042] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.537 [2024-07-15 23:51:15.453349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.537 [2024-07-15 23:51:15.453378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.537 [2024-07-15 23:51:15.459434] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.537 [2024-07-15 23:51:15.459753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.537 [2024-07-15 23:51:15.459781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.537 [2024-07-15 23:51:15.465531] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.537 [2024-07-15 23:51:15.465613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.537 [2024-07-15 23:51:15.465641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.537 [2024-07-15 23:51:15.471983] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.537 [2024-07-15 23:51:15.472280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.537 
[2024-07-15 23:51:15.472323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.537 [2024-07-15 23:51:15.477305] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.537 [2024-07-15 23:51:15.477631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.537 [2024-07-15 23:51:15.477659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.537 [2024-07-15 23:51:15.482641] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.537 [2024-07-15 23:51:15.482936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.537 [2024-07-15 23:51:15.482975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.537 [2024-07-15 23:51:15.487647] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.537 [2024-07-15 23:51:15.487986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.537 [2024-07-15 23:51:15.488016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.537 [2024-07-15 23:51:15.493348] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.537 [2024-07-15 23:51:15.493689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.537 [2024-07-15 23:51:15.493719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.537 [2024-07-15 23:51:15.498980] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.537 [2024-07-15 23:51:15.499317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.537 [2024-07-15 23:51:15.499345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.537 [2024-07-15 23:51:15.504579] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.537 [2024-07-15 23:51:15.504922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.537 [2024-07-15 23:51:15.504951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.537 [2024-07-15 23:51:15.510367] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.537 [2024-07-15 23:51:15.510662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.537 [2024-07-15 23:51:15.510691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.537 [2024-07-15 23:51:15.515471] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.537 [2024-07-15 23:51:15.515765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.537 [2024-07-15 23:51:15.515794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.537 [2024-07-15 23:51:15.521650] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.537 [2024-07-15 23:51:15.521992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.537 [2024-07-15 23:51:15.522021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.537 [2024-07-15 23:51:15.528453] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.537 [2024-07-15 23:51:15.528748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.537 [2024-07-15 23:51:15.528777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.537 [2024-07-15 23:51:15.534741] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.537 [2024-07-15 23:51:15.535074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.537 [2024-07-15 23:51:15.535103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.537 [2024-07-15 23:51:15.541916] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.537 [2024-07-15 23:51:15.542224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.537 [2024-07-15 23:51:15.542259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.537 [2024-07-15 23:51:15.549235] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.537 [2024-07-15 23:51:15.549533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.537 [2024-07-15 23:51:15.549563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.537 [2024-07-15 23:51:15.556714] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.537 [2024-07-15 23:51:15.557017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.537 [2024-07-15 23:51:15.557046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.537 [2024-07-15 23:51:15.564093] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.537 [2024-07-15 23:51:15.564407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.537 [2024-07-15 23:51:15.564436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.537 [2024-07-15 23:51:15.571055] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.537 [2024-07-15 23:51:15.571363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.537 [2024-07-15 23:51:15.571393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.537 [2024-07-15 23:51:15.577990] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.537 [2024-07-15 23:51:15.578327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.537 [2024-07-15 23:51:15.578371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.537 [2024-07-15 23:51:15.584669] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.537 [2024-07-15 23:51:15.584982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.537 [2024-07-15 23:51:15.585017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.537 [2024-07-15 23:51:15.591544] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.537 [2024-07-15 23:51:15.591855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.537 [2024-07-15 23:51:15.591885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.537 [2024-07-15 23:51:15.598621] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.537 [2024-07-15 23:51:15.598919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.537 [2024-07-15 23:51:15.598949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.537 [2024-07-15 23:51:15.605822] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.537 [2024-07-15 23:51:15.606026] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.537 [2024-07-15 23:51:15.606053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.537 [2024-07-15 23:51:15.613449] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.537 [2024-07-15 23:51:15.613776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.537 [2024-07-15 23:51:15.613806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.537 [2024-07-15 23:51:15.620680] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.538 [2024-07-15 23:51:15.621014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.538 [2024-07-15 23:51:15.621044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.538 [2024-07-15 23:51:15.627443] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.538 [2024-07-15 23:51:15.627783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.538 [2024-07-15 23:51:15.627814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.538 [2024-07-15 23:51:15.634677] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.538 [2024-07-15 23:51:15.634978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.538 [2024-07-15 23:51:15.635008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.538 [2024-07-15 23:51:15.642343] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.538 [2024-07-15 23:51:15.642643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.538 [2024-07-15 23:51:15.642672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.538 [2024-07-15 23:51:15.649423] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.538 [2024-07-15 23:51:15.649718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.538 [2024-07-15 23:51:15.649748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.538 [2024-07-15 23:51:15.656503] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.538 
[2024-07-15 23:51:15.656815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.538 [2024-07-15 23:51:15.656844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.797 [2024-07-15 23:51:15.664369] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.797 [2024-07-15 23:51:15.664670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.797 [2024-07-15 23:51:15.664708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.797 [2024-07-15 23:51:15.671674] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.797 [2024-07-15 23:51:15.672116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.797 [2024-07-15 23:51:15.672144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.797 [2024-07-15 23:51:15.679309] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.797 [2024-07-15 23:51:15.679624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.797 [2024-07-15 23:51:15.679653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.797 [2024-07-15 23:51:15.686986] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.797 [2024-07-15 23:51:15.687287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.797 [2024-07-15 23:51:15.687317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.797 [2024-07-15 23:51:15.694578] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.797 [2024-07-15 23:51:15.694896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.797 [2024-07-15 23:51:15.694941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.797 [2024-07-15 23:51:15.701403] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.797 [2024-07-15 23:51:15.701712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.797 [2024-07-15 23:51:15.701741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.797 [2024-07-15 23:51:15.708309] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.797 [2024-07-15 23:51:15.708620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.797 [2024-07-15 23:51:15.708649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.797 [2024-07-15 23:51:15.714160] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.797 [2024-07-15 23:51:15.714458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.797 [2024-07-15 23:51:15.714487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.797 [2024-07-15 23:51:15.719597] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.797 [2024-07-15 23:51:15.719913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.797 [2024-07-15 23:51:15.719967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.797 [2024-07-15 23:51:15.725251] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.797 [2024-07-15 23:51:15.725564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.797 [2024-07-15 23:51:15.725593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.797 [2024-07-15 23:51:15.730890] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.797 [2024-07-15 23:51:15.731221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.797 [2024-07-15 23:51:15.731250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.797 [2024-07-15 23:51:15.737127] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.797 [2024-07-15 23:51:15.737435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.797 [2024-07-15 23:51:15.737464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.797 [2024-07-15 23:51:15.742338] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:40.797 [2024-07-15 23:51:15.742643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.797 [2024-07-15 23:51:15.742672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.797 [2024-07-15 23:51:15.747592] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:40.797 [2024-07-15 23:51:15.747887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.797 [2024-07-15 23:51:15.747917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:40.798 [2024-07-15 23:51:15.752683] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:40.798 [2024-07-15 23:51:15.753024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.798 [2024-07-15 23:51:15.753054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:40.798 [2024-07-15 23:51:15.758827] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:40.798 [2024-07-15 23:51:15.759172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.798 [2024-07-15 23:51:15.759201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:40.798 [2024-07-15 23:51:15.764806] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:40.798 [2024-07-15 23:51:15.765110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.798 [2024-07-15 23:51:15.765140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:40.798 [2024-07-15 23:51:15.770112] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:40.798 [2024-07-15 23:51:15.770409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.798 [2024-07-15 23:51:15.770438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:40.798 [2024-07-15 23:51:15.775104] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:40.798 [2024-07-15 23:51:15.775388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.798 [2024-07-15 23:51:15.775418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:40.798 [2024-07-15 23:51:15.780545] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:40.798 [2024-07-15 23:51:15.780869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.798 [2024-07-15 23:51:15.780898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:40.798 [2024-07-15 23:51:15.785742] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:40.798 [2024-07-15 23:51:15.786052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.798 [2024-07-15 23:51:15.786081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:40.798 [2024-07-15 23:51:15.792072] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:40.798 [2024-07-15 23:51:15.792397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.798 [2024-07-15 23:51:15.792425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:40.798 [2024-07-15 23:51:15.798308] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:40.798 [2024-07-15 23:51:15.798602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.798 [2024-07-15 23:51:15.798632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:40.798 [2024-07-15 23:51:15.804490] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:40.798 [2024-07-15 23:51:15.804817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.798 [2024-07-15 23:51:15.804847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:40.798 [2024-07-15 23:51:15.810618] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:40.798 [2024-07-15 23:51:15.810946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.798 [2024-07-15 23:51:15.810980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:40.798 [2024-07-15 23:51:15.816347] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:40.798 [2024-07-15 23:51:15.816644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.798 [2024-07-15 23:51:15.816673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:40.798 [2024-07-15 23:51:15.821477] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:40.798 [2024-07-15 23:51:15.821573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.798 [2024-07-15 23:51:15.821609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:40.798 [2024-07-15 23:51:15.826824] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:40.798 [2024-07-15 23:51:15.827124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.798 [2024-07-15 23:51:15.827170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:40.798 [2024-07-15 23:51:15.832037] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:40.798 [2024-07-15 23:51:15.832317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.798 [2024-07-15 23:51:15.832347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:40.798 [2024-07-15 23:51:15.837512] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:40.798 [2024-07-15 23:51:15.837820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.798 [2024-07-15 23:51:15.837848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:40.798 [2024-07-15 23:51:15.843767] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:40.798 [2024-07-15 23:51:15.844083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.798 [2024-07-15 23:51:15.844113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:40.798 [2024-07-15 23:51:15.849081] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:40.798 [2024-07-15 23:51:15.849390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.798 [2024-07-15 23:51:15.849418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:40.798 [2024-07-15 23:51:15.854321] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:40.798 [2024-07-15 23:51:15.854615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.798 [2024-07-15 23:51:15.854645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:40.798 [2024-07-15 23:51:15.859534] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:40.798 [2024-07-15 23:51:15.859831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.798 [2024-07-15 23:51:15.859860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:40.798 [2024-07-15 23:51:15.864405] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:40.798 [2024-07-15 23:51:15.864734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.798 [2024-07-15 23:51:15.864761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:40.798 [2024-07-15 23:51:15.869398] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:40.798 [2024-07-15 23:51:15.869731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.798 [2024-07-15 23:51:15.869761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:40.798 [2024-07-15 23:51:15.874688] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:40.798 [2024-07-15 23:51:15.874992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.798 [2024-07-15 23:51:15.875022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:40.798 [2024-07-15 23:51:15.881228] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:40.798 [2024-07-15 23:51:15.881526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.798 [2024-07-15 23:51:15.881556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:40.798 [2024-07-15 23:51:15.887376] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:40.798 [2024-07-15 23:51:15.887702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.798 [2024-07-15 23:51:15.887731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:40.798 [2024-07-15 23:51:15.894126] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:40.798 [2024-07-15 23:51:15.894466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.798 [2024-07-15 23:51:15.894496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:40.798 [2024-07-15 23:51:15.901236] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:40.798 [2024-07-15 23:51:15.901561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.798 [2024-07-15 23:51:15.901590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:40.798 [2024-07-15 23:51:15.908468] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:40.798 [2024-07-15 23:51:15.908766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.798 [2024-07-15 23:51:15.908794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:40.798 [2024-07-15 23:51:15.916284] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:40.798 [2024-07-15 23:51:15.916708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.798 [2024-07-15 23:51:15.916738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:41.058 [2024-07-15 23:51:15.923509] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.058 [2024-07-15 23:51:15.923693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.058 [2024-07-15 23:51:15.923721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:41.058 [2024-07-15 23:51:15.930635] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.058 [2024-07-15 23:51:15.930962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.058 [2024-07-15 23:51:15.930992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:41.058 [2024-07-15 23:51:15.937179] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.058 [2024-07-15 23:51:15.937487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.058 [2024-07-15 23:51:15.937515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:41.058 [2024-07-15 23:51:15.944063] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.058 [2024-07-15 23:51:15.944360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.058 [2024-07-15 23:51:15.944389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:41.058 [2024-07-15 23:51:15.950849] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.058 [2024-07-15 23:51:15.951152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.058 [2024-07-15 23:51:15.951182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:41.058 [2024-07-15 23:51:15.957806] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.058 [2024-07-15 23:51:15.958112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.058 [2024-07-15 23:51:15.958142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:41.058 [2024-07-15 23:51:15.964736] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.058 [2024-07-15 23:51:15.965041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.058 [2024-07-15 23:51:15.965074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:41.058 [2024-07-15 23:51:15.972098] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.058 [2024-07-15 23:51:15.972427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.058 [2024-07-15 23:51:15.972458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:41.058 [2024-07-15 23:51:15.978894] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.058 [2024-07-15 23:51:15.979198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.058 [2024-07-15 23:51:15.979228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:41.058 [2024-07-15 23:51:15.985918] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.058 [2024-07-15 23:51:15.986248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.058 [2024-07-15 23:51:15.986290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:41.058 [2024-07-15 23:51:15.993334] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.058 [2024-07-15 23:51:15.993685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.058 [2024-07-15 23:51:15.993715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:41.058 [2024-07-15 23:51:16.000886] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.058 [2024-07-15 23:51:16.001185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.058 [2024-07-15 23:51:16.001215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:41.058 [2024-07-15 23:51:16.007511] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.058 [2024-07-15 23:51:16.007817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.058 [2024-07-15 23:51:16.007846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:41.058 [2024-07-15 23:51:16.012972] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.058 [2024-07-15 23:51:16.013275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.058 [2024-07-15 23:51:16.013304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:41.058 [2024-07-15 23:51:16.018499] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.058 [2024-07-15 23:51:16.018795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.058 [2024-07-15 23:51:16.018825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:41.058 [2024-07-15 23:51:16.024138] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.058 [2024-07-15 23:51:16.024435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.058 [2024-07-15 23:51:16.024464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:41.058 [2024-07-15 23:51:16.029041] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.058 [2024-07-15 23:51:16.029348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.058 [2024-07-15 23:51:16.029377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:41.058 [2024-07-15 23:51:16.034002] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.058 [2024-07-15 23:51:16.034299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.058 [2024-07-15 23:51:16.034328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:41.058 [2024-07-15 23:51:16.039056] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.058 [2024-07-15 23:51:16.039357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.058 [2024-07-15 23:51:16.039387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:41.058 [2024-07-15 23:51:16.044271] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.058 [2024-07-15 23:51:16.044670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.058 [2024-07-15 23:51:16.044715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:41.058 [2024-07-15 23:51:16.049689] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.058 [2024-07-15 23:51:16.049998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.058 [2024-07-15 23:51:16.050027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:41.058 [2024-07-15 23:51:16.055359] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.058 [2024-07-15 23:51:16.055679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.058 [2024-07-15 23:51:16.055708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:41.058 [2024-07-15 23:51:16.061935] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.058 [2024-07-15 23:51:16.062285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.058 [2024-07-15 23:51:16.062315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:41.058 [2024-07-15 23:51:16.067447] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.058 [2024-07-15 23:51:16.067840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.058 [2024-07-15 23:51:16.067869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:41.058 [2024-07-15 23:51:16.072796] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.058 [2024-07-15 23:51:16.073099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.058 [2024-07-15 23:51:16.073128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:41.058 [2024-07-15 23:51:16.078051] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.058 [2024-07-15 23:51:16.078360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.058 [2024-07-15 23:51:16.078388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:41.058 [2024-07-15 23:51:16.083458] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.058 [2024-07-15 23:51:16.083754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.058 [2024-07-15 23:51:16.083790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:41.058 [2024-07-15 23:51:16.088688] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.059 [2024-07-15 23:51:16.089005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.059 [2024-07-15 23:51:16.089033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:41.059 [2024-07-15 23:51:16.093905] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.059 [2024-07-15 23:51:16.094208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.059 [2024-07-15 23:51:16.094253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:41.059 [2024-07-15 23:51:16.099081] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.059 [2024-07-15 23:51:16.099393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.059 [2024-07-15 23:51:16.099435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:41.059 [2024-07-15 23:51:16.104221] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.059 [2024-07-15 23:51:16.104548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.059 [2024-07-15 23:51:16.104577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:41.059 [2024-07-15 23:51:16.109636] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.059 [2024-07-15 23:51:16.109944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.059 [2024-07-15 23:51:16.109980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:41.059 [2024-07-15 23:51:16.115819] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.059 [2024-07-15 23:51:16.116149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.059 [2024-07-15 23:51:16.116191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:41.059 [2024-07-15 23:51:16.121429] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.059 [2024-07-15 23:51:16.121769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.059 [2024-07-15 23:51:16.121797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:41.059 [2024-07-15 23:51:16.126763] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.059 [2024-07-15 23:51:16.127068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.059 [2024-07-15 23:51:16.127097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:41.059 [2024-07-15 23:51:16.132021] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.059 [2024-07-15 23:51:16.132338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.059 [2024-07-15 23:51:16.132382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:41.059 [2024-07-15 23:51:16.137364] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.059 [2024-07-15 23:51:16.137660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.059 [2024-07-15 23:51:16.137689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:41.059 [2024-07-15 23:51:16.142475] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.059 [2024-07-15 23:51:16.142769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.059 [2024-07-15 23:51:16.142799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:41.059 [2024-07-15 23:51:16.148447] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.059 [2024-07-15 23:51:16.148774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.059 [2024-07-15 23:51:16.148803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:41.059 [2024-07-15 23:51:16.154363] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.059 [2024-07-15 23:51:16.154689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.059 [2024-07-15 23:51:16.154718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:41.059 [2024-07-15 23:51:16.159495] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.059 [2024-07-15 23:51:16.159602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.059 [2024-07-15 23:51:16.159629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:41.059 [2024-07-15 23:51:16.164743] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.059 [2024-07-15 23:51:16.165049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.059 [2024-07-15 23:51:16.165078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:41.059 [2024-07-15 23:51:16.170232] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.059 [2024-07-15 23:51:16.170521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.059 [2024-07-15 23:51:16.170550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:41.059 [2024-07-15 23:51:16.176609] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.059 [2024-07-15 23:51:16.176907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.059 [2024-07-15 23:51:16.176936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:41.318 [2024-07-15 23:51:16.182284] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.318 [2024-07-15 23:51:16.182584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.318 [2024-07-15 23:51:16.182614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:41.318 [2024-07-15 23:51:16.187580] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.318 [2024-07-15 23:51:16.187877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.318 [2024-07-15 23:51:16.187907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:41.318 [2024-07-15 23:51:16.193417] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.318 [2024-07-15 23:51:16.193846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.318 [2024-07-15 23:51:16.193875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:41.318 [2024-07-15 23:51:16.198581] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.318 [2024-07-15 23:51:16.198907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.318 [2024-07-15 23:51:16.198936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:41.318 [2024-07-15 23:51:16.204432] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.318 [2024-07-15 23:51:16.204728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.318 [2024-07-15 23:51:16.204759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:41.318 [2024-07-15 23:51:16.209609] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.318 [2024-07-15 23:51:16.209983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.318 [2024-07-15 23:51:16.210020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:41.318 [2024-07-15 23:51:16.214881] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.318 [2024-07-15 23:51:16.215187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.318 [2024-07-15 23:51:16.215226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:41.318 [2024-07-15 23:51:16.220006] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.318 [2024-07-15 23:51:16.220303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.318 [2024-07-15 23:51:16.220332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:41.318 [2024-07-15 23:51:16.226085] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.318 [2024-07-15 23:51:16.226410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.318 [2024-07-15 23:51:16.226459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:41.318 [2024-07-15 23:51:16.232588] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.318 [2024-07-15 23:51:16.232885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.318 [2024-07-15 23:51:16.232913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:41.318 [2024-07-15 23:51:16.239655] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.318 [2024-07-15 23:51:16.239998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.318 [2024-07-15 23:51:16.240041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:41.318 [2024-07-15 23:51:16.247724] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.318 [2024-07-15 23:51:16.248092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.318 [2024-07-15 23:51:16.248136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:41.318 [2024-07-15 23:51:16.255529] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.318 [2024-07-15 23:51:16.255854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.318 [2024-07-15 23:51:16.255885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:41.318 [2024-07-15 23:51:16.263411] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.318 [2024-07-15 23:51:16.263736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.318 [2024-07-15 23:51:16.263766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:41.318 [2024-07-15 23:51:16.271371] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.318 [2024-07-15 23:51:16.271701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.318 [2024-07-15 23:51:16.271730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:41.318 [2024-07-15 23:51:16.279384] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.318 [2024-07-15 23:51:16.279681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.318 [2024-07-15 23:51:16.279710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:41.318 [2024-07-15 23:51:16.287278] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.318 [2024-07-15 23:51:16.287595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.318 [2024-07-15 23:51:16.287625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:41.318 [2024-07-15 23:51:16.295018] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.318 [2024-07-15 23:51:16.295340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.318 [2024-07-15 23:51:16.295382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:41.318 [2024-07-15 23:51:16.303046] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.318 [2024-07-15 23:51:16.303379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.319 [2024-07-15 23:51:16.303408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:41.319 [2024-07-15 23:51:16.310779] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.319 [2024-07-15 23:51:16.311080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.319 [2024-07-15 23:51:16.311109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:41.319 [2024-07-15 23:51:16.318271] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.319 [2024-07-15 23:51:16.318464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.319 [2024-07-15 23:51:16.318492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:41.319 [2024-07-15 23:51:16.325532] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.319 [2024-07-15 23:51:16.325917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.319 [2024-07-15 23:51:16.325946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:41.319 [2024-07-15 23:51:16.332325] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.319 [2024-07-15 23:51:16.332619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.319 [2024-07-15 23:51:16.332648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:41.319 [2024-07-15 23:51:16.338242] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.319 [2024-07-15 23:51:16.338620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.319 [2024-07-15 23:51:16.338657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:41.319 [2024-07-15 23:51:16.345134] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.319 [2024-07-15 23:51:16.345525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.319 [2024-07-15 23:51:16.345554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:41.319 [2024-07-15 23:51:16.352664] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.319 [2024-07-15 23:51:16.353052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.319 [2024-07-15 23:51:16.353081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:41.319 [2024-07-15 23:51:16.360069] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.319 [2024-07-15 23:51:16.360472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.319 [2024-07-15 23:51:16.360502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:41.319 [2024-07-15 23:51:16.367681] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.319 [2024-07-15 23:51:16.368002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.319 [2024-07-15 23:51:16.368031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:41.319 [2024-07-15 23:51:16.374650] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.319 [2024-07-15 23:51:16.374945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.319 [2024-07-15 23:51:16.374981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:41.319 [2024-07-15 23:51:16.381688] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.319 [2024-07-15 23:51:16.382068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.319 [2024-07-15 23:51:16.382098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:41.319 [2024-07-15 23:51:16.389345] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.319 [2024-07-15 23:51:16.389727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.319 [2024-07-15 23:51:16.389756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:41.319 [2024-07-15 23:51:16.396647] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.319 [2024-07-15 23:51:16.397038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.319 [2024-07-15 23:51:16.397068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:41.319 [2024-07-15 23:51:16.404289] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.319 [2024-07-15 23:51:16.404613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.319 [2024-07-15 23:51:16.404643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:41.319 [2024-07-15 23:51:16.411499] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.319 [2024-07-15 23:51:16.411836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.319 [2024-07-15 23:51:16.411865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:41.319 [2024-07-15 23:51:16.419272] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.319 [2024-07-15 23:51:16.419656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.319 [2024-07-15 23:51:16.419715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:41.319 [2024-07-15 23:51:16.426756] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.319 [2024-07-15 23:51:16.427106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.319 [2024-07-15 23:51:16.427135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:41.319 [2024-07-15 23:51:16.434264] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.319 [2024-07-15 23:51:16.434654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.319 [2024-07-15 23:51:16.434683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:41.319 [2024-07-15 23:51:16.441781] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.578 [2024-07-15 23:51:16.442179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.578 [2024-07-15 23:51:16.442209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:41.579 [2024-07-15 23:51:16.449382] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.579 [2024-07-15 23:51:16.449696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.579 [2024-07-15 23:51:16.449726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:41.579 [2024-07-15 23:51:16.456858] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.579 [2024-07-15 23:51:16.457219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.579 [2024-07-15 23:51:16.457263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:41.579 [2024-07-15 23:51:16.464523] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.579 [2024-07-15 23:51:16.464824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.579 [2024-07-15 23:51:16.464853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:41.579 [2024-07-15 23:51:16.471575] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.579 [2024-07-15 23:51:16.471857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.579 [2024-07-15 23:51:16.471887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:41.579 [2024-07-15 23:51:16.478522] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.579 [2024-07-15 23:51:16.478859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.579 [2024-07-15 23:51:16.478888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:41.579 [2024-07-15 23:51:16.485572] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.579 [2024-07-15 23:51:16.485831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.579 [2024-07-15 23:51:16.485860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:41.579 [2024-07-15 23:51:16.492240] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.579 [2024-07-15 23:51:16.492522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.579 [2024-07-15 23:51:16.492552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:41.579 [2024-07-15 23:51:16.499188] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.579 [2024-07-15 23:51:16.499522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.579 [2024-07-15 23:51:16.499551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:41.579 [2024-07-15 23:51:16.506141] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.579 [2024-07-15 23:51:16.506433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.579 [2024-07-15 23:51:16.506463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:41.579 [2024-07-15 23:51:16.512832] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.579 [2024-07-15 23:51:16.513107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.579 [2024-07-15 23:51:16.513137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:41.579 [2024-07-15 23:51:16.518842] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.579 [2024-07-15 23:51:16.519153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.579 [2024-07-15 23:51:16.519183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:41.579 [2024-07-15 23:51:16.525822] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.579 [2024-07-15 23:51:16.526172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.579 [2024-07-15 23:51:16.526201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:41.579 [2024-07-15 23:51:16.533130] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.579 [2024-07-15 23:51:16.533419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.579 [2024-07-15 23:51:16.533448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:41.579 [2024-07-15 23:51:16.539910] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.579 [2024-07-15 23:51:16.540236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.579 [2024-07-15 23:51:16.540278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:41.579 [2024-07-15 23:51:16.547265] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.579 [2024-07-15 23:51:16.547600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.579 [2024-07-15 23:51:16.547630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:41.579 [2024-07-15 23:51:16.554643] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.579 [2024-07-15 23:51:16.554931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.579 [2024-07-15 23:51:16.554966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:41.579 [2024-07-15 23:51:16.561264] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.579 [2024-07-15 23:51:16.561499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.579 [2024-07-15 23:51:16.561542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:41.579 [2024-07-15 23:51:16.566603] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.579 [2024-07-15 23:51:16.566850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.579 [2024-07-15 23:51:16.566880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:41.579 [2024-07-15 23:51:16.571202] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.579 [2024-07-15 23:51:16.571451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.579 [2024-07-15 23:51:16.571480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:41.579 [2024-07-15 23:51:16.575967] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.579 [2024-07-15 23:51:16.576205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.579 [2024-07-15 23:51:16.576249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:41.579 [2024-07-15 23:51:16.580939] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.579 [2024-07-15 23:51:16.581182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.579 [2024-07-15 23:51:16.581211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:41.579 [2024-07-15 23:51:16.585851] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.579 [2024-07-15 23:51:16.586091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.579 [2024-07-15 23:51:16.586121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:41.579 [2024-07-15 23:51:16.590824] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.579 [2024-07-15 23:51:16.591067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.579 [2024-07-15 23:51:16.591101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:41.579 [2024-07-15 23:51:16.595832] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.579 [2024-07-15 23:51:16.596083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.579 [2024-07-15 23:51:16.596128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:41.579 [2024-07-15 23:51:16.600968] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.579 [2024-07-15 23:51:16.601216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.579 [2024-07-15 23:51:16.601245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:41.579 [2024-07-15 23:51:16.605804] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.579 [2024-07-15 23:51:16.606045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.579 [2024-07-15 23:51:16.606075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:41.579 [2024-07-15 23:51:16.610522] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90
00:24:41.579
[2024-07-15 23:51:16.610756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.579 [2024-07-15 23:51:16.610785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:41.579 [2024-07-15 23:51:16.615254] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.579 [2024-07-15 23:51:16.615518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.579 [2024-07-15 23:51:16.615549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:41.579 [2024-07-15 23:51:16.620092] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.579 [2024-07-15 23:51:16.620344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.579 [2024-07-15 23:51:16.620372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:41.579 [2024-07-15 23:51:16.624913] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.580 [2024-07-15 23:51:16.625171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.580 [2024-07-15 23:51:16.625215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:41.580 [2024-07-15 23:51:16.629813] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.580 [2024-07-15 23:51:16.630071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.580 [2024-07-15 23:51:16.630100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:41.580 [2024-07-15 23:51:16.634462] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.580 [2024-07-15 23:51:16.634696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.580 [2024-07-15 23:51:16.634724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:41.580 [2024-07-15 23:51:16.640230] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.580 [2024-07-15 23:51:16.640467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.580 [2024-07-15 23:51:16.640497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:41.580 [2024-07-15 23:51:16.645662] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.580 [2024-07-15 23:51:16.645924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.580 [2024-07-15 23:51:16.645952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:41.580 [2024-07-15 23:51:16.650589] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.580 [2024-07-15 23:51:16.650823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.580 [2024-07-15 23:51:16.650867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:41.580 [2024-07-15 23:51:16.655444] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.580 [2024-07-15 23:51:16.655679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.580 [2024-07-15 23:51:16.655708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:41.580 [2024-07-15 23:51:16.660180] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.580 [2024-07-15 23:51:16.660458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.580 [2024-07-15 23:51:16.660486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:41.580 [2024-07-15 23:51:16.665011] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.580 [2024-07-15 23:51:16.665263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.580 [2024-07-15 23:51:16.665306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:41.580 [2024-07-15 23:51:16.669823] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.580 [2024-07-15 23:51:16.670094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.580 [2024-07-15 23:51:16.670124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:41.580 [2024-07-15 23:51:16.674691] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.580 [2024-07-15 23:51:16.674938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.580 [2024-07-15 23:51:16.674992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:41.580 [2024-07-15 23:51:16.679513] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.580 [2024-07-15 23:51:16.679762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.580 [2024-07-15 23:51:16.679790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:41.580 [2024-07-15 23:51:16.684435] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.580 [2024-07-15 23:51:16.684729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.580 [2024-07-15 23:51:16.684759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:41.580 [2024-07-15 23:51:16.689136] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.580 [2024-07-15 23:51:16.689385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.580 [2024-07-15 23:51:16.689413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:41.580 [2024-07-15 23:51:16.694694] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.580 [2024-07-15 23:51:16.694930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.580 [2024-07-15 23:51:16.694965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:41.580 [2024-07-15 23:51:16.700461] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.580 [2024-07-15 23:51:16.700712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.580 [2024-07-15 23:51:16.700741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:41.839 [2024-07-15 23:51:16.705306] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.839 [2024-07-15 23:51:16.705546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.839 [2024-07-15 23:51:16.705574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:41.839 [2024-07-15 23:51:16.709943] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.839 [2024-07-15 23:51:16.710188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.839 [2024-07-15 23:51:16.710217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:24:41.839 [2024-07-15 23:51:16.715055] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.839 [2024-07-15 23:51:16.715307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.839 [2024-07-15 23:51:16.715335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:41.839 [2024-07-15 23:51:16.720913] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.839 [2024-07-15 23:51:16.721182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.839 [2024-07-15 23:51:16.721212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:41.839 [2024-07-15 23:51:16.726215] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.839 [2024-07-15 23:51:16.726449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.839 [2024-07-15 23:51:16.726493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:41.839 [2024-07-15 23:51:16.731439] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.839 [2024-07-15 23:51:16.731674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.839 [2024-07-15 23:51:16.731703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:41.839 [2024-07-15 23:51:16.736039] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.839 [2024-07-15 23:51:16.736264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.839 [2024-07-15 23:51:16.736293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:41.839 [2024-07-15 23:51:16.741558] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.839 [2024-07-15 23:51:16.741833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.839 [2024-07-15 23:51:16.741862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:41.839 [2024-07-15 23:51:16.746544] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.839 [2024-07-15 23:51:16.746757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.839 [2024-07-15 23:51:16.746786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:41.839 [2024-07-15 23:51:16.750887] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.839 [2024-07-15 23:51:16.751118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.839 [2024-07-15 23:51:16.751148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:41.839 [2024-07-15 23:51:16.755139] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.839 [2024-07-15 23:51:16.755348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.839 [2024-07-15 23:51:16.755376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:41.839 [2024-07-15 23:51:16.759924] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.839 [2024-07-15 23:51:16.760151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.839 [2024-07-15 23:51:16.760181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:41.839 [2024-07-15 23:51:16.764609] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.839 [2024-07-15 23:51:16.764850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.839 [2024-07-15 23:51:16.764880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:41.839 [2024-07-15 23:51:16.769962] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.839 [2024-07-15 23:51:16.770225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.839 [2024-07-15 23:51:16.770255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:41.839 [2024-07-15 23:51:16.775474] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.839 [2024-07-15 23:51:16.775729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.839 [2024-07-15 23:51:16.775759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:41.839 [2024-07-15 23:51:16.781356] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.839 [2024-07-15 23:51:16.781648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.839 [2024-07-15 23:51:16.781678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:41.839 [2024-07-15 23:51:16.787564] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.839 [2024-07-15 23:51:16.787871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.839 [2024-07-15 23:51:16.787900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:41.839 [2024-07-15 23:51:16.793568] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.839 [2024-07-15 23:51:16.793841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.839 [2024-07-15 23:51:16.793870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:41.839 [2024-07-15 23:51:16.798540] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.839 [2024-07-15 23:51:16.798765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.839 [2024-07-15 23:51:16.798794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:41.839 [2024-07-15 23:51:16.802852] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.839 [2024-07-15 23:51:16.803071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.839 [2024-07-15 23:51:16.803099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:41.839 [2024-07-15 23:51:16.807040] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.839 [2024-07-15 23:51:16.807261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.839 [2024-07-15 23:51:16.807295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:41.839 [2024-07-15 23:51:16.811195] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.839 [2024-07-15 23:51:16.811430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.839 [2024-07-15 23:51:16.811460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:41.839 [2024-07-15 23:51:16.815494] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.839 [2024-07-15 23:51:16.815702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.839 [2024-07-15 23:51:16.815730] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:41.839 [2024-07-15 23:51:16.819720] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.839 [2024-07-15 23:51:16.819926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.839 [2024-07-15 23:51:16.819953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:41.839 [2024-07-15 23:51:16.824805] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.839 [2024-07-15 23:51:16.825036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.839 [2024-07-15 23:51:16.825063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:41.839 [2024-07-15 23:51:16.829645] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.839 [2024-07-15 23:51:16.829851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.839 [2024-07-15 23:51:16.829893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:41.839 [2024-07-15 23:51:16.834014] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.839 [2024-07-15 23:51:16.834222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.840 [2024-07-15 23:51:16.834265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:41.840 [2024-07-15 23:51:16.838414] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.840 [2024-07-15 23:51:16.838622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.840 [2024-07-15 23:51:16.838649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:41.840 [2024-07-15 23:51:16.842833] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.840 [2024-07-15 23:51:16.843049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.840 [2024-07-15 23:51:16.843076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:41.840 [2024-07-15 23:51:16.847088] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.840 [2024-07-15 23:51:16.847322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.840 
[2024-07-15 23:51:16.847367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:41.840 [2024-07-15 23:51:16.851403] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.840 [2024-07-15 23:51:16.851667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.840 [2024-07-15 23:51:16.851697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:41.840 [2024-07-15 23:51:16.856130] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.840 [2024-07-15 23:51:16.856365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.840 [2024-07-15 23:51:16.856393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:41.840 [2024-07-15 23:51:16.860374] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831af0) with pdu=0x2000190fef90 00:24:41.840 [2024-07-15 23:51:16.860537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.840 [2024-07-15 23:51:16.860564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:41.840 00:24:41.840 Latency(us) 00:24:41.840 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:41.840 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:41.840 nvme0n1 : 2.00 5190.71 648.84 0.00 0.00 3075.28 1881.13 8349.77 00:24:41.840 =================================================================================================================== 00:24:41.840 Total : 5190.71 648.84 0.00 0.00 3075.28 1881.13 8349.77 00:24:41.840 0 00:24:41.840 23:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:41.840 23:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:41.840 23:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:41.840 23:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:41.840 | .driver_specific 00:24:41.840 | .nvme_error 00:24:41.840 | .status_code 00:24:41.840 | .command_transient_transport_error' 00:24:42.097 23:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 335 > 0 )) 00:24:42.097 23:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3881913 00:24:42.097 23:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3881913 ']' 00:24:42.097 23:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3881913 00:24:42.097 23:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:42.097 23:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' 
Linux = Linux ']' 00:24:42.097 23:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3881913 00:24:42.097 23:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:42.097 23:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:42.097 23:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3881913' 00:24:42.097 killing process with pid 3881913 00:24:42.098 23:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3881913 00:24:42.098 Received shutdown signal, test time was about 2.000000 seconds 00:24:42.098 00:24:42.098 Latency(us) 00:24:42.098 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:42.098 =================================================================================================================== 00:24:42.098 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:42.098 23:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3881913 00:24:42.355 23:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3880036 00:24:42.355 23:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3880036 ']' 00:24:42.355 23:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3880036 00:24:42.355 23:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:42.355 23:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:42.355 23:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3880036 00:24:42.355 23:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:42.355 23:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:42.355 23:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3880036' 00:24:42.355 killing process with pid 3880036 00:24:42.355 23:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3880036 00:24:42.355 23:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3880036 00:24:42.613 00:24:42.613 real 0m15.386s 00:24:42.613 user 0m30.662s 00:24:42.613 sys 0m4.092s 00:24:42.613 23:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:42.613 23:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:42.613 ************************************ 00:24:42.613 END TEST nvmf_digest_error 00:24:42.613 ************************************ 00:24:42.613 23:51:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:24:42.613 23:51:17 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:24:42.613 23:51:17 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:24:42.613 23:51:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:42.613 23:51:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:24:42.613 23:51:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:42.613 23:51:17 
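The digest-error check that closed out this test (the get_transient_errcount / jq pipeline and the (( 335 > 0 )) assertion traced before the shutdown above) reduces to roughly the following sketch. It is reconstructed from the xtrace output rather than copied from digest.sh; the socket path and bdev name are the ones this run used. The test only passes if the deliberately corrupted write PDUs actually show up as transient transport errors in the controller's iostat counters:

    get_transient_errcount() {
        # bdevperf exposes an RPC socket; bdev_get_iostat returns per-bdev
        # stats, including the nvme_error status-code counters.
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" \
            | jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    errcount=$(get_transient_errcount nvme0n1)
    (( errcount > 0 ))   # 335 in this run; a zero count would fail the test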
nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:24:42.613 23:51:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:42.613 23:51:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:42.613 rmmod nvme_tcp 00:24:42.613 rmmod nvme_fabrics 00:24:42.872 rmmod nvme_keyring 00:24:42.872 23:51:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:42.872 23:51:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:24:42.872 23:51:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:24:42.872 23:51:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 3880036 ']' 00:24:42.872 23:51:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 3880036 00:24:42.872 23:51:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 3880036 ']' 00:24:42.872 23:51:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 3880036 00:24:42.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3880036) - No such process 00:24:42.872 23:51:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 3880036 is not found' 00:24:42.872 Process with pid 3880036 is not found 00:24:42.872 23:51:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:42.872 23:51:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:42.872 23:51:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:42.872 23:51:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:42.872 23:51:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:42.872 23:51:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:42.872 23:51:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:42.872 23:51:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:44.776 23:51:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:44.776 00:24:44.776 real 0m35.189s 00:24:44.776 user 1m2.007s 00:24:44.776 sys 0m9.698s 00:24:44.776 23:51:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:44.776 23:51:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:44.776 ************************************ 00:24:44.776 END TEST nvmf_digest 00:24:44.776 ************************************ 00:24:44.776 23:51:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:44.776 23:51:19 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:24:44.776 23:51:19 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:24:44.776 23:51:19 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:24:44.776 23:51:19 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:24:44.776 23:51:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:44.776 23:51:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:44.776 23:51:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:44.776 ************************************ 00:24:44.776 START TEST nvmf_bdevperf 00:24:44.776 ************************************ 00:24:44.776 23:51:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:24:45.033 * Looking for test storage...
00:24:45.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=[long PATH value elided: the same /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin entries repeated five times ahead of the stock system PATH]
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=[same PATH with /opt/go/1.21.1/bin prepended; elided]
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=[same PATH with /opt/protoc/21.7/bin prepended; elided]
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo [the exported PATH echoed back; elided]
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns
00:24:45.033 23:51:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:45.034 23:51:19 nvmf_tcp.nvmf_bdevperf --
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:45.034 23:51:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.034 23:51:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:45.034 23:51:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:45.034 23:51:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:45.034 23:51:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:46.934 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:46.934 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:46.934 Found net devices under 0000:09:00.0: cvl_0_0 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:46.934 Found net devices under 0000:09:00.1: cvl_0_1 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:46.934 23:51:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:46.934 23:51:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:46.934 23:51:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:46.934 23:51:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:46.934 23:51:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:47.193 23:51:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:47.193 23:51:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:47.193 23:51:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:47.193 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:47.193 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:24:47.193 00:24:47.193 --- 10.0.0.2 ping statistics --- 00:24:47.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.193 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:24:47.193 23:51:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:47.193 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:47.193 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:24:47.193 00:24:47.193 --- 10.0.0.1 ping statistics --- 00:24:47.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.193 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:24:47.193 23:51:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:47.193 23:51:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:24:47.193 23:51:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:47.193 23:51:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:47.193 23:51:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:47.193 23:51:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:47.193 23:51:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:47.193 23:51:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:47.193 23:51:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:47.193 23:51:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:24:47.193 23:51:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:47.193 23:51:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:47.193 23:51:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:47.193 23:51:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:47.193 23:51:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3884383 00:24:47.193 23:51:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:47.193 23:51:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3884383 00:24:47.193 23:51:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3884383 ']' 00:24:47.193 23:51:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:47.193 23:51:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:47.193 23:51:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:47.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:47.193 23:51:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:47.193 23:51:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:47.193 [2024-07-15 23:51:22.155171] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:24:47.193 [2024-07-15 23:51:22.155263] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:47.193 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.193 [2024-07-15 23:51:22.218339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:47.451 [2024-07-15 23:51:22.329283] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
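The nvmf_tcp_init block above is, reduced to the commands this log actually ran, the whole test topology: one port of the e810 pair (cvl_0_0) is moved into a private network namespace to act as the target at 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. A minimal sketch of the same wiring, using the interface and namespace names from this run:
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port enters the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator sanity check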
00:24:47.451 [2024-07-15 23:51:22.329333] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:47.451 [2024-07-15 23:51:22.329362] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:47.451 [2024-07-15 23:51:22.329373] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:47.451 [2024-07-15 23:51:22.329386] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:47.451 [2024-07-15 23:51:22.329472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:47.451 [2024-07-15 23:51:22.329535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:47.451 [2024-07-15 23:51:22.329539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:47.451 23:51:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:47.451 23:51:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:24:47.451 23:51:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:47.451 23:51:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:47.451 23:51:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:47.451 23:51:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:47.451 23:51:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:47.451 23:51:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.451 23:51:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:47.451 [2024-07-15 23:51:22.475110] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:47.451 23:51:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.451 23:51:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:47.451 23:51:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.451 23:51:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:47.451 Malloc0 00:24:47.451 23:51:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.451 23:51:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:47.451 23:51:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.451 23:51:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:47.451 23:51:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.451 23:51:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:47.451 23:51:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.451 23:51:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:47.451 23:51:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.451 23:51:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:47.451 23:51:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:24:47.451 23:51:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:47.451 [2024-07-15 23:51:22.534900] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:47.451 23:51:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.451 23:51:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:24:47.451 23:51:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:24:47.451 23:51:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:24:47.451 23:51:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:24:47.451 23:51:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:47.451 23:51:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:47.451 { 00:24:47.451 "params": { 00:24:47.451 "name": "Nvme$subsystem", 00:24:47.451 "trtype": "$TEST_TRANSPORT", 00:24:47.451 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:47.451 "adrfam": "ipv4", 00:24:47.451 "trsvcid": "$NVMF_PORT", 00:24:47.451 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:47.451 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:47.451 "hdgst": ${hdgst:-false}, 00:24:47.451 "ddgst": ${ddgst:-false} 00:24:47.451 }, 00:24:47.451 "method": "bdev_nvme_attach_controller" 00:24:47.451 } 00:24:47.451 EOF 00:24:47.451 )") 00:24:47.451 23:51:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:24:47.451 23:51:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:24:47.451 23:51:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:24:47.451 23:51:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:47.451 "params": { 00:24:47.451 "name": "Nvme1", 00:24:47.452 "trtype": "tcp", 00:24:47.452 "traddr": "10.0.0.2", 00:24:47.452 "adrfam": "ipv4", 00:24:47.452 "trsvcid": "4420", 00:24:47.452 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:47.452 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:47.452 "hdgst": false, 00:24:47.452 "ddgst": false 00:24:47.452 }, 00:24:47.452 "method": "bdev_nvme_attach_controller" 00:24:47.452 }' 00:24:47.710 [2024-07-15 23:51:22.584343] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:24:47.710 [2024-07-15 23:51:22.584412] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3884407 ] 00:24:47.710 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.710 [2024-07-15 23:51:22.643352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.710 [2024-07-15 23:51:22.756091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.968 Running I/O for 1 seconds... 
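The JSON fragment printed above is what gen_nvmf_target_json pipes to bdevperf on /dev/fd/62. Written out as a standalone file, the equivalent run looks like the sketch below (the outer subsystems/bdev wrapper is an assumption based on nvmf/common.sh, and the /tmp path is arbitrary):
    cat > /tmp/bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # same workload flags as the run above: 128-deep queue, 4 KiB I/O, verify, 1 second
    ./build/examples/bdevperf --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 1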
00:24:48.898 00:24:48.899 Latency(us) 00:24:48.899 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:48.899 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:48.899 Verification LBA range: start 0x0 length 0x4000 00:24:48.899 Nvme1n1 : 1.01 8869.19 34.65 0.00 0.00 14345.88 3034.07 15437.37 00:24:48.899 =================================================================================================================== 00:24:48.899 Total : 8869.19 34.65 0.00 0.00 14345.88 3034.07 15437.37 00:24:49.157 23:51:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3884557 00:24:49.157 23:51:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:24:49.157 23:51:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:24:49.157 23:51:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:24:49.157 23:51:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:24:49.157 23:51:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:24:49.157 23:51:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:49.157 23:51:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:49.157 { 00:24:49.157 "params": { 00:24:49.157 "name": "Nvme$subsystem", 00:24:49.157 "trtype": "$TEST_TRANSPORT", 00:24:49.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:49.157 "adrfam": "ipv4", 00:24:49.157 "trsvcid": "$NVMF_PORT", 00:24:49.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:49.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:49.157 "hdgst": ${hdgst:-false}, 00:24:49.157 "ddgst": ${ddgst:-false} 00:24:49.157 }, 00:24:49.157 "method": "bdev_nvme_attach_controller" 00:24:49.157 } 00:24:49.157 EOF 00:24:49.157 )") 00:24:49.157 23:51:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:24:49.157 23:51:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:24:49.157 23:51:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:24:49.157 23:51:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:49.157 "params": { 00:24:49.157 "name": "Nvme1", 00:24:49.157 "trtype": "tcp", 00:24:49.157 "traddr": "10.0.0.2", 00:24:49.157 "adrfam": "ipv4", 00:24:49.157 "trsvcid": "4420", 00:24:49.157 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:49.157 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:49.157 "hdgst": false, 00:24:49.157 "ddgst": false 00:24:49.157 }, 00:24:49.157 "method": "bdev_nvme_attach_controller" 00:24:49.157 }' 00:24:49.415 [2024-07-15 23:51:24.288614] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:24:49.415 [2024-07-15 23:51:24.288690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3884557 ] 00:24:49.415 EAL: No free 2048 kB hugepages reported on node 1 00:24:49.415 [2024-07-15 23:51:24.351369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.415 [2024-07-15 23:51:24.460854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:49.673 Running I/O for 15 seconds... 
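This second run is the failover case: bdevperf is started in the background for 15 seconds, and while it is pumping I/O the test hard-kills the target so the initiator's reset/reconnect path is exercised against dead connections. The sequence host/bdevperf.sh executes next, as a sketch with the PIDs from this run:
    ./build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f &
    bdevperfpid=$!        # 3884557 in this log
    sleep 3
    kill -9 3884383       # SIGKILL nvmf_tgt: in-flight I/O completes as ABORTED - SQ DELETION
    sleep 3               # give the initiator time to notice the dead qpair and start resetting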
00:24:52.203 23:51:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3884383 00:24:52.203 23:51:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:24:52.203 [2024-07-15 23:51:27.256117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:46512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.203 [2024-07-15 23:51:27.256167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the remaining nvme_io_qpair_print_command / spdk_nvme_print_completion pairs for the 128-deep queue elided: identical to the pair above except for cid and lba; every outstanding WRITE (lba 46520-47376) and READ (lba 46368-46496) on qid:1 completes with ABORTED - SQ DELETION (00/08) once the target is killed ...]
00:24:52.206 [2024-07-15 23:51:27.260027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:47384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.206 [2024-07-15 23:51:27.260041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.206 [2024-07-15 23:51:27.260056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20344c0 is same with the state(5) to be set 00:24:52.206 [2024-07-15 23:51:27.260072] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:52.206 [2024-07-15 23:51:27.260084] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:52.206 [2024-07-15 23:51:27.260095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46504 len:8 PRP1 0x0 PRP2 0x0 00:24:52.206 [2024-07-15 23:51:27.260108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.206 [2024-07-15 23:51:27.260166] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20344c0 was disconnected and freed. reset controller.
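Each reconnect attempt that follows dies in posix_sock_create with connect() errno = 111, which on Linux is ECONNREFUSED: with nvmf_tgt gone, nothing is listening on 10.0.0.2:4420 inside the target namespace, so the kernel there refuses the TCP handshake outright. One way to confirm from the shell (namespace name as created earlier):
    ip netns exec cvl_0_0_ns_spdk ss -ltn '( sport = :4420 )'   # prints no sockets after the kill -9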
00:24:52.206 [2024-07-15 23:51:27.263358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.206 [2024-07-15 23:51:27.263434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:52.206 [2024-07-15 23:51:27.265050] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.206 [2024-07-15 23:51:27.265105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:52.206 [2024-07-15 23:51:27.265124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:52.206 [2024-07-15 23:51:27.265384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:52.206 [2024-07-15 23:51:27.265580] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.206 [2024-07-15 23:51:27.265599] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.206 [2024-07-15 23:51:27.265614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.206 [2024-07-15 23:51:27.268564] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.206 [2024-07-15 23:51:27.276731] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.206 [2024-07-15 23:51:27.277182] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.206 [2024-07-15 23:51:27.277225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:52.206 [2024-07-15 23:51:27.277242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:52.206 [2024-07-15 23:51:27.277497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:52.206 [2024-07-15 23:51:27.277691] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.206 [2024-07-15 23:51:27.277709] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.206 [2024-07-15 23:51:27.277722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.206 [2024-07-15 23:51:27.280620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.206 [2024-07-15 23:51:27.289934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.206 [2024-07-15 23:51:27.290474] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.206 [2024-07-15 23:51:27.290517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:52.206 [2024-07-15 23:51:27.290533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:52.206 [2024-07-15 23:51:27.290797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:52.206 [2024-07-15 23:51:27.291021] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.206 [2024-07-15 23:51:27.291042] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.206 [2024-07-15 23:51:27.291055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.206 [2024-07-15 23:51:27.293889] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.206 [2024-07-15 23:51:27.303080] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.206 [2024-07-15 23:51:27.303498] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.206 [2024-07-15 23:51:27.303526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:52.206 [2024-07-15 23:51:27.303541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:52.206 [2024-07-15 23:51:27.303757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:52.206 [2024-07-15 23:51:27.303993] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.206 [2024-07-15 23:51:27.304027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.206 [2024-07-15 23:51:27.304041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.206 [2024-07-15 23:51:27.306951] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.207 [2024-07-15 23:51:27.316146] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.207 [2024-07-15 23:51:27.316494] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.207 [2024-07-15 23:51:27.316522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:52.207 [2024-07-15 23:51:27.316538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:52.207 [2024-07-15 23:51:27.316773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:52.207 [2024-07-15 23:51:27.316991] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.207 [2024-07-15 23:51:27.317012] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.207 [2024-07-15 23:51:27.317024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.207 [2024-07-15 23:51:27.319994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.467 [2024-07-15 23:51:27.329498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.467 [2024-07-15 23:51:27.329877] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.467 [2024-07-15 23:51:27.329918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:52.467 [2024-07-15 23:51:27.329933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:52.467 [2024-07-15 23:51:27.330214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:52.467 [2024-07-15 23:51:27.330441] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.467 [2024-07-15 23:51:27.330460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.467 [2024-07-15 23:51:27.330472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.467 [2024-07-15 23:51:27.333706] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.467 [2024-07-15 23:51:27.342692] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.467 [2024-07-15 23:51:27.343127] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.467 [2024-07-15 23:51:27.343156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:52.467 [2024-07-15 23:51:27.343172] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:52.467 [2024-07-15 23:51:27.343413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:52.467 [2024-07-15 23:51:27.343621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.467 [2024-07-15 23:51:27.343640] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.467 [2024-07-15 23:51:27.343652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.467 [2024-07-15 23:51:27.346558] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.467 [2024-07-15 23:51:27.355688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.467 [2024-07-15 23:51:27.356097] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.467 [2024-07-15 23:51:27.356125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:52.467 [2024-07-15 23:51:27.356140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:52.467 [2024-07-15 23:51:27.356377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:52.467 [2024-07-15 23:51:27.356570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.467 [2024-07-15 23:51:27.356589] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.467 [2024-07-15 23:51:27.356600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.467 [2024-07-15 23:51:27.359497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.467 [2024-07-15 23:51:27.368805] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.467 [2024-07-15 23:51:27.369288] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.467 [2024-07-15 23:51:27.369317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:52.467 [2024-07-15 23:51:27.369332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:52.467 [2024-07-15 23:51:27.369601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:52.467 [2024-07-15 23:51:27.369794] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.467 [2024-07-15 23:51:27.369813] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.467 [2024-07-15 23:51:27.369829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.467 [2024-07-15 23:51:27.372690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.467 [2024-07-15 23:51:27.381817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.467 [2024-07-15 23:51:27.382257] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.467 [2024-07-15 23:51:27.382299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:52.467 [2024-07-15 23:51:27.382316] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:52.467 [2024-07-15 23:51:27.382556] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:52.467 [2024-07-15 23:51:27.382765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.467 [2024-07-15 23:51:27.382784] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.467 [2024-07-15 23:51:27.382796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.467 [2024-07-15 23:51:27.385694] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.467 [2024-07-15 23:51:27.395035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.467 [2024-07-15 23:51:27.395491] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.467 [2024-07-15 23:51:27.395518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:52.467 [2024-07-15 23:51:27.395549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:52.467 [2024-07-15 23:51:27.395790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:52.467 [2024-07-15 23:51:27.396026] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.468 [2024-07-15 23:51:27.396046] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.468 [2024-07-15 23:51:27.396059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.468 [2024-07-15 23:51:27.398855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.468 [2024-07-15 23:51:27.408175] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.468 [2024-07-15 23:51:27.408575] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.468 [2024-07-15 23:51:27.408618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:52.468 [2024-07-15 23:51:27.408633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:52.468 [2024-07-15 23:51:27.408898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:52.468 [2024-07-15 23:51:27.409120] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.468 [2024-07-15 23:51:27.409140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.468 [2024-07-15 23:51:27.409152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.468 [2024-07-15 23:51:27.412046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.468 [2024-07-15 23:51:27.421233] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.468 [2024-07-15 23:51:27.421560] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.468 [2024-07-15 23:51:27.421592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:52.468 [2024-07-15 23:51:27.421607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:52.468 [2024-07-15 23:51:27.421822] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:52.468 [2024-07-15 23:51:27.422059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.468 [2024-07-15 23:51:27.422079] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.468 [2024-07-15 23:51:27.422092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.468 [2024-07-15 23:51:27.425003] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.468 [2024-07-15 23:51:27.434359] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.468 [2024-07-15 23:51:27.434860] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.468 [2024-07-15 23:51:27.434902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:52.468 [2024-07-15 23:51:27.434918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:52.468 [2024-07-15 23:51:27.435167] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:52.468 [2024-07-15 23:51:27.435393] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.468 [2024-07-15 23:51:27.435413] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.468 [2024-07-15 23:51:27.435425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.468 [2024-07-15 23:51:27.438319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.468 [2024-07-15 23:51:27.447468] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.468 [2024-07-15 23:51:27.447982] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.468 [2024-07-15 23:51:27.448029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:52.468 [2024-07-15 23:51:27.448044] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:52.468 [2024-07-15 23:51:27.448308] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:52.468 [2024-07-15 23:51:27.448500] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.468 [2024-07-15 23:51:27.448519] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.468 [2024-07-15 23:51:27.448531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.468 [2024-07-15 23:51:27.451311] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.468 [2024-07-15 23:51:27.460639] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.468 [2024-07-15 23:51:27.461015] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.468 [2024-07-15 23:51:27.461043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:52.468 [2024-07-15 23:51:27.461059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:52.468 [2024-07-15 23:51:27.461299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:52.468 [2024-07-15 23:51:27.461513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.468 [2024-07-15 23:51:27.461532] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.468 [2024-07-15 23:51:27.461544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.468 [2024-07-15 23:51:27.464400] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.468 [2024-07-15 23:51:27.473733] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.468 [2024-07-15 23:51:27.474160] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.468 [2024-07-15 23:51:27.474201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:52.468 [2024-07-15 23:51:27.474217] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:52.468 [2024-07-15 23:51:27.474455] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:52.468 [2024-07-15 23:51:27.474663] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.468 [2024-07-15 23:51:27.474682] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.468 [2024-07-15 23:51:27.474693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.468 [2024-07-15 23:51:27.477589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.468 [2024-07-15 23:51:27.486822] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.468 [2024-07-15 23:51:27.487263] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.468 [2024-07-15 23:51:27.487305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:52.468 [2024-07-15 23:51:27.487321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:52.468 [2024-07-15 23:51:27.487561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:52.468 [2024-07-15 23:51:27.487769] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.468 [2024-07-15 23:51:27.487787] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.468 [2024-07-15 23:51:27.487799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.468 [2024-07-15 23:51:27.490694] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.468 [2024-07-15 23:51:27.500036] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.468 [2024-07-15 23:51:27.500432] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.468 [2024-07-15 23:51:27.500474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:52.469 [2024-07-15 23:51:27.500489] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:52.469 [2024-07-15 23:51:27.500759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:52.469 [2024-07-15 23:51:27.500978] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.469 [2024-07-15 23:51:27.500999] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.469 [2024-07-15 23:51:27.501011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.469 [2024-07-15 23:51:27.503848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.469 [2024-07-15 23:51:27.513173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.469 [2024-07-15 23:51:27.513604] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.469 [2024-07-15 23:51:27.513649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:52.469 [2024-07-15 23:51:27.513665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:52.469 [2024-07-15 23:51:27.513921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:52.469 [2024-07-15 23:51:27.514157] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.469 [2024-07-15 23:51:27.514179] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.469 [2024-07-15 23:51:27.514192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.469 [2024-07-15 23:51:27.517790] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.469 [2024-07-15 23:51:27.527021] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.469 [2024-07-15 23:51:27.527466] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.469 [2024-07-15 23:51:27.527496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:52.469 [2024-07-15 23:51:27.527513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:52.469 [2024-07-15 23:51:27.527727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:52.469 [2024-07-15 23:51:27.528020] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.469 [2024-07-15 23:51:27.528042] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.469 [2024-07-15 23:51:27.528056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.469 [2024-07-15 23:51:27.531071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.469 [2024-07-15 23:51:27.540332] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.469 [2024-07-15 23:51:27.540661] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.469 [2024-07-15 23:51:27.540688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:52.469 [2024-07-15 23:51:27.540703] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:52.469 [2024-07-15 23:51:27.540919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:52.469 [2024-07-15 23:51:27.541146] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.469 [2024-07-15 23:51:27.541168] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.469 [2024-07-15 23:51:27.541180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.469 [2024-07-15 23:51:27.544137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.469 [2024-07-15 23:51:27.553498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.469 [2024-07-15 23:51:27.553936] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.469 [2024-07-15 23:51:27.553985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:52.469 [2024-07-15 23:51:27.554006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:52.469 [2024-07-15 23:51:27.554262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:52.469 [2024-07-15 23:51:27.554456] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.469 [2024-07-15 23:51:27.554474] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.469 [2024-07-15 23:51:27.554486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.469 [2024-07-15 23:51:27.557478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.469 [2024-07-15 23:51:27.566746] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.469 [2024-07-15 23:51:27.567143] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.469 [2024-07-15 23:51:27.567172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:52.469 [2024-07-15 23:51:27.567188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:52.469 [2024-07-15 23:51:27.567429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:52.469 [2024-07-15 23:51:27.567628] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.469 [2024-07-15 23:51:27.567647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.469 [2024-07-15 23:51:27.567660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.469 [2024-07-15 23:51:27.570642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.469 [2024-07-15 23:51:27.580077] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.469 [2024-07-15 23:51:27.580533] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.469 [2024-07-15 23:51:27.580575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:52.469 [2024-07-15 23:51:27.580592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:52.469 [2024-07-15 23:51:27.580830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:52.469 [2024-07-15 23:51:27.581086] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.469 [2024-07-15 23:51:27.581107] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.469 [2024-07-15 23:51:27.581120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.469 [2024-07-15 23:51:27.584049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.781 [2024-07-15 23:51:27.594505] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.781 [2024-07-15 23:51:27.594943] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.781 [2024-07-15 23:51:27.594994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:52.781 [2024-07-15 23:51:27.595038] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:52.781 [2024-07-15 23:51:27.595347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:52.781 [2024-07-15 23:51:27.595666] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.781 [2024-07-15 23:51:27.595702] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.781 [2024-07-15 23:51:27.595743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.781 [2024-07-15 23:51:27.600163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.781 [2024-07-15 23:51:27.609001] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.781 [2024-07-15 23:51:27.609412] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.781 [2024-07-15 23:51:27.609456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:52.781 [2024-07-15 23:51:27.609474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:52.781 [2024-07-15 23:51:27.609715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:52.781 [2024-07-15 23:51:27.609914] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.781 [2024-07-15 23:51:27.609934] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.781 [2024-07-15 23:51:27.609946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.781 [2024-07-15 23:51:27.612901] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.781 [2024-07-15 23:51:27.622223] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.781 [2024-07-15 23:51:27.622648] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.781 [2024-07-15 23:51:27.622676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:52.781 [2024-07-15 23:51:27.622692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:52.781 [2024-07-15 23:51:27.622914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:52.781 [2024-07-15 23:51:27.623155] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.781 [2024-07-15 23:51:27.623176] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.781 [2024-07-15 23:51:27.623189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.781 [2024-07-15 23:51:27.626176] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.781 [2024-07-15 23:51:27.635419] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.781 [2024-07-15 23:51:27.635815] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.781 [2024-07-15 23:51:27.635858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:52.781 [2024-07-15 23:51:27.635875] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:52.781 [2024-07-15 23:51:27.636129] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:52.781 [2024-07-15 23:51:27.636349] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.782 [2024-07-15 23:51:27.636368] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.782 [2024-07-15 23:51:27.636381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.782 [2024-07-15 23:51:27.639377] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.782 [2024-07-15 23:51:27.648666] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.782 [2024-07-15 23:51:27.649045] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.782 [2024-07-15 23:51:27.649075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:52.782 [2024-07-15 23:51:27.649091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:52.782 [2024-07-15 23:51:27.649337] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:52.782 [2024-07-15 23:51:27.649545] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.782 [2024-07-15 23:51:27.649564] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.782 [2024-07-15 23:51:27.649576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.782 [2024-07-15 23:51:27.652552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.782 [2024-07-15 23:51:27.662066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.782 [2024-07-15 23:51:27.662498] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.782 [2024-07-15 23:51:27.662527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:52.782 [2024-07-15 23:51:27.662542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:52.782 [2024-07-15 23:51:27.662778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:52.782 [2024-07-15 23:51:27.663016] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.782 [2024-07-15 23:51:27.663037] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.782 [2024-07-15 23:51:27.663050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.782 [2024-07-15 23:51:27.666029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.782 [2024-07-15 23:51:27.675668] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.782 [2024-07-15 23:51:27.676068] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.782 [2024-07-15 23:51:27.676111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:52.782 [2024-07-15 23:51:27.676126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:52.782 [2024-07-15 23:51:27.676391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:52.782 [2024-07-15 23:51:27.676585] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.782 [2024-07-15 23:51:27.676604] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.782 [2024-07-15 23:51:27.676616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.782 [2024-07-15 23:51:27.679515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.782 [2024-07-15 23:51:27.688858] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.782 [2024-07-15 23:51:27.689259] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.782 [2024-07-15 23:51:27.689312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:52.782 [2024-07-15 23:51:27.689328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:52.782 [2024-07-15 23:51:27.689599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:52.782 [2024-07-15 23:51:27.689792] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.782 [2024-07-15 23:51:27.689811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.782 [2024-07-15 23:51:27.689823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.782 [2024-07-15 23:51:27.692724] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.782 [2024-07-15 23:51:27.702082] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.782 [2024-07-15 23:51:27.702544] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.782 [2024-07-15 23:51:27.702587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:52.782 [2024-07-15 23:51:27.702603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:52.782 [2024-07-15 23:51:27.702859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:52.782 [2024-07-15 23:51:27.703083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.782 [2024-07-15 23:51:27.703103] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.782 [2024-07-15 23:51:27.703116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.782 [2024-07-15 23:51:27.706066] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.782 [2024-07-15 23:51:27.715274] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.782 [2024-07-15 23:51:27.715687] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.782 [2024-07-15 23:51:27.715715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:52.782 [2024-07-15 23:51:27.715731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:52.782 [2024-07-15 23:51:27.715951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:52.782 [2024-07-15 23:51:27.716159] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.782 [2024-07-15 23:51:27.716179] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.782 [2024-07-15 23:51:27.716191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.782 [2024-07-15 23:51:27.719096] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.782 [2024-07-15 23:51:27.728447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.782 [2024-07-15 23:51:27.728765] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.782 [2024-07-15 23:51:27.728791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:52.782 [2024-07-15 23:51:27.728806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:52.782 [2024-07-15 23:51:27.729025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:52.782 [2024-07-15 23:51:27.729239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.782 [2024-07-15 23:51:27.729259] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.782 [2024-07-15 23:51:27.729291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.782 [2024-07-15 23:51:27.732175] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.782 [2024-07-15 23:51:27.741679] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.782 [2024-07-15 23:51:27.742121] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.782 [2024-07-15 23:51:27.742149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:52.782 [2024-07-15 23:51:27.742165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:52.782 [2024-07-15 23:51:27.742408] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:52.782 [2024-07-15 23:51:27.742616] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.782 [2024-07-15 23:51:27.742636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.782 [2024-07-15 23:51:27.742647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.782 [2024-07-15 23:51:27.745559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.782 [2024-07-15 23:51:27.754873] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.782 [2024-07-15 23:51:27.755272] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.782 [2024-07-15 23:51:27.755315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:52.782 [2024-07-15 23:51:27.755330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:52.782 [2024-07-15 23:51:27.755564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:52.782 [2024-07-15 23:51:27.755773] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.782 [2024-07-15 23:51:27.755802] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.782 [2024-07-15 23:51:27.755814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.782 [2024-07-15 23:51:27.758713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.782 [2024-07-15 23:51:27.768084] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.782 [2024-07-15 23:51:27.768507] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.782 [2024-07-15 23:51:27.768548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:52.782 [2024-07-15 23:51:27.768563] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:52.782 [2024-07-15 23:51:27.768830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:52.782 [2024-07-15 23:51:27.769081] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.782 [2024-07-15 23:51:27.769103] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.782 [2024-07-15 23:51:27.769117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.782 [2024-07-15 23:51:27.772646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.782 [2024-07-15 23:51:27.781337] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.782 [2024-07-15 23:51:27.781718] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.782 [2024-07-15 23:51:27.781761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:52.782 [2024-07-15 23:51:27.781777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:52.783 [2024-07-15 23:51:27.782050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:52.783 [2024-07-15 23:51:27.782271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.783 [2024-07-15 23:51:27.782290] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.783 [2024-07-15 23:51:27.782302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.783 [2024-07-15 23:51:27.785259] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.783 [2024-07-15 23:51:27.794597] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.783 [2024-07-15 23:51:27.795034] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.783 [2024-07-15 23:51:27.795063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:52.783 [2024-07-15 23:51:27.795079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:52.783 [2024-07-15 23:51:27.795322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:52.783 [2024-07-15 23:51:27.795514] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.783 [2024-07-15 23:51:27.795533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.783 [2024-07-15 23:51:27.795545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.783 [2024-07-15 23:51:27.798442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.783 [2024-07-15 23:51:27.807767] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.783 [2024-07-15 23:51:27.808200] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.783 [2024-07-15 23:51:27.808227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:52.783 [2024-07-15 23:51:27.808243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:52.783 [2024-07-15 23:51:27.808463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:52.783 [2024-07-15 23:51:27.808671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.783 [2024-07-15 23:51:27.808690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.783 [2024-07-15 23:51:27.808701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.783 [2024-07-15 23:51:27.811595] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:53.565 [2024-07-15 23:51:28.453279] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.565 [2024-07-15 23:51:28.453662] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.565 [2024-07-15 23:51:28.453703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:53.565 [2024-07-15 23:51:28.453719] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:53.565 [2024-07-15 23:51:28.453941] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:53.565 [2024-07-15 23:51:28.454159] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.565 [2024-07-15 23:51:28.454178] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.565 [2024-07-15 23:51:28.454190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.565 [2024-07-15 23:51:28.456986] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.565 [2024-07-15 23:51:28.466307] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.565 [2024-07-15 23:51:28.466741] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.565 [2024-07-15 23:51:28.466769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:53.565 [2024-07-15 23:51:28.466784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:53.565 [2024-07-15 23:51:28.467032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:53.565 [2024-07-15 23:51:28.467241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.565 [2024-07-15 23:51:28.467260] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.565 [2024-07-15 23:51:28.467271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.565 [2024-07-15 23:51:28.470052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:53.565 [2024-07-15 23:51:28.479400] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.566 [2024-07-15 23:51:28.479825] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.566 [2024-07-15 23:51:28.479882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:53.566 [2024-07-15 23:51:28.479897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:53.566 [2024-07-15 23:51:28.480174] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:53.566 [2024-07-15 23:51:28.480385] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.566 [2024-07-15 23:51:28.480404] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.566 [2024-07-15 23:51:28.480416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.566 [2024-07-15 23:51:28.483313] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.566 [2024-07-15 23:51:28.492639] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.566 [2024-07-15 23:51:28.493012] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.566 [2024-07-15 23:51:28.493040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:53.566 [2024-07-15 23:51:28.493056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:53.566 [2024-07-15 23:51:28.493298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:53.566 [2024-07-15 23:51:28.493506] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.566 [2024-07-15 23:51:28.493525] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.566 [2024-07-15 23:51:28.493537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.566 [2024-07-15 23:51:28.496434] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:53.566 [2024-07-15 23:51:28.505729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.566 [2024-07-15 23:51:28.506106] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.566 [2024-07-15 23:51:28.506149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:53.566 [2024-07-15 23:51:28.506164] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:53.566 [2024-07-15 23:51:28.506432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:53.566 [2024-07-15 23:51:28.506625] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.566 [2024-07-15 23:51:28.506644] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.566 [2024-07-15 23:51:28.506656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.566 [2024-07-15 23:51:28.509597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.566 [2024-07-15 23:51:28.518890] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.566 [2024-07-15 23:51:28.519257] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.566 [2024-07-15 23:51:28.519285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:53.566 [2024-07-15 23:51:28.519301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:53.566 [2024-07-15 23:51:28.519515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:53.566 [2024-07-15 23:51:28.519753] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.566 [2024-07-15 23:51:28.519775] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.566 [2024-07-15 23:51:28.519788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.566 [2024-07-15 23:51:28.523364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:53.566 [2024-07-15 23:51:28.532160] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.566 [2024-07-15 23:51:28.532568] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.566 [2024-07-15 23:51:28.532610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:53.566 [2024-07-15 23:51:28.532633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:53.566 [2024-07-15 23:51:28.532868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:53.566 [2024-07-15 23:51:28.533112] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.566 [2024-07-15 23:51:28.533134] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.566 [2024-07-15 23:51:28.533146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.566 [2024-07-15 23:51:28.536180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.566 [2024-07-15 23:51:28.545435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.566 [2024-07-15 23:51:28.545808] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.566 [2024-07-15 23:51:28.545850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:53.566 [2024-07-15 23:51:28.545866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:53.566 [2024-07-15 23:51:28.546146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:53.566 [2024-07-15 23:51:28.546357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.566 [2024-07-15 23:51:28.546376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.566 [2024-07-15 23:51:28.546388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.566 [2024-07-15 23:51:28.549287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:53.566 [2024-07-15 23:51:28.558463] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.566 [2024-07-15 23:51:28.558900] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.566 [2024-07-15 23:51:28.558964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:53.566 [2024-07-15 23:51:28.558982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:53.566 [2024-07-15 23:51:28.559243] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:53.566 [2024-07-15 23:51:28.559436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.566 [2024-07-15 23:51:28.559454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.566 [2024-07-15 23:51:28.559466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.566 [2024-07-15 23:51:28.562249] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.566 [2024-07-15 23:51:28.571545] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.566 [2024-07-15 23:51:28.571977] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.566 [2024-07-15 23:51:28.572025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:53.566 [2024-07-15 23:51:28.572040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:53.566 [2024-07-15 23:51:28.572288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:53.566 [2024-07-15 23:51:28.572495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.566 [2024-07-15 23:51:28.572518] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.566 [2024-07-15 23:51:28.572530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.566 [2024-07-15 23:51:28.575314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:53.566 [2024-07-15 23:51:28.584645] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.566 [2024-07-15 23:51:28.585079] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.566 [2024-07-15 23:51:28.585107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:53.566 [2024-07-15 23:51:28.585122] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:53.566 [2024-07-15 23:51:28.585359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:53.566 [2024-07-15 23:51:28.585567] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.566 [2024-07-15 23:51:28.585586] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.566 [2024-07-15 23:51:28.585598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.566 [2024-07-15 23:51:28.588505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.566 [2024-07-15 23:51:28.597865] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.566 [2024-07-15 23:51:28.598300] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.566 [2024-07-15 23:51:28.598326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:53.566 [2024-07-15 23:51:28.598355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:53.566 [2024-07-15 23:51:28.598577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:53.566 [2024-07-15 23:51:28.598784] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.566 [2024-07-15 23:51:28.598803] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.566 [2024-07-15 23:51:28.598815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.566 [2024-07-15 23:51:28.601677] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:53.566 [2024-07-15 23:51:28.610972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.566 [2024-07-15 23:51:28.611472] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.566 [2024-07-15 23:51:28.611513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:53.566 [2024-07-15 23:51:28.611530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:53.566 [2024-07-15 23:51:28.611798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:53.566 [2024-07-15 23:51:28.612018] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.566 [2024-07-15 23:51:28.612038] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.566 [2024-07-15 23:51:28.612051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.566 [2024-07-15 23:51:28.614927] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.566 [2024-07-15 23:51:28.624095] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.567 [2024-07-15 23:51:28.624598] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.567 [2024-07-15 23:51:28.624625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:53.567 [2024-07-15 23:51:28.624656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:53.567 [2024-07-15 23:51:28.624920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:53.567 [2024-07-15 23:51:28.625141] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.567 [2024-07-15 23:51:28.625162] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.567 [2024-07-15 23:51:28.625174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.567 [2024-07-15 23:51:28.628070] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:53.567 [2024-07-15 23:51:28.637130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.567 [2024-07-15 23:51:28.637489] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.567 [2024-07-15 23:51:28.637546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:53.567 [2024-07-15 23:51:28.637560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:53.567 [2024-07-15 23:51:28.637789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:53.567 [2024-07-15 23:51:28.637992] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.567 [2024-07-15 23:51:28.638012] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.567 [2024-07-15 23:51:28.638024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.567 [2024-07-15 23:51:28.640813] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.567 [2024-07-15 23:51:28.650188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.567 [2024-07-15 23:51:28.650529] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.567 [2024-07-15 23:51:28.650557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:53.567 [2024-07-15 23:51:28.650573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:53.567 [2024-07-15 23:51:28.650816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:53.567 [2024-07-15 23:51:28.651053] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.567 [2024-07-15 23:51:28.651073] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.567 [2024-07-15 23:51:28.651086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.567 [2024-07-15 23:51:28.654097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:53.567 [2024-07-15 23:51:28.663445] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.567 [2024-07-15 23:51:28.663945] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.567 [2024-07-15 23:51:28.663995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:53.567 [2024-07-15 23:51:28.664011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:53.567 [2024-07-15 23:51:28.664286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:53.567 [2024-07-15 23:51:28.664479] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.567 [2024-07-15 23:51:28.664499] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.567 [2024-07-15 23:51:28.664510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.567 [2024-07-15 23:51:28.667519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.567 [2024-07-15 23:51:28.676627] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.567 [2024-07-15 23:51:28.677002] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.567 [2024-07-15 23:51:28.677031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:53.567 [2024-07-15 23:51:28.677047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:53.567 [2024-07-15 23:51:28.677289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:53.567 [2024-07-15 23:51:28.677498] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.567 [2024-07-15 23:51:28.677517] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.567 [2024-07-15 23:51:28.677529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.567 [2024-07-15 23:51:28.680427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:53.826 [2024-07-15 23:51:28.690209] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.826 [2024-07-15 23:51:28.690658] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.826 [2024-07-15 23:51:28.690700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:53.826 [2024-07-15 23:51:28.690716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:53.826 [2024-07-15 23:51:28.690976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:53.826 [2024-07-15 23:51:28.691176] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.826 [2024-07-15 23:51:28.691196] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.826 [2024-07-15 23:51:28.691208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.826 [2024-07-15 23:51:28.694419] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.826 [2024-07-15 23:51:28.703537] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.826 [2024-07-15 23:51:28.703906] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.826 [2024-07-15 23:51:28.703933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:53.826 [2024-07-15 23:51:28.703949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:53.826 [2024-07-15 23:51:28.704217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:53.826 [2024-07-15 23:51:28.704432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.826 [2024-07-15 23:51:28.704451] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.826 [2024-07-15 23:51:28.704494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.826 [2024-07-15 23:51:28.707596] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:53.826 [2024-07-15 23:51:28.716739] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.826 [2024-07-15 23:51:28.717172] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.826 [2024-07-15 23:51:28.717199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:53.826 [2024-07-15 23:51:28.717230] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:53.826 [2024-07-15 23:51:28.717472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:53.826 [2024-07-15 23:51:28.717671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.826 [2024-07-15 23:51:28.717690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.826 [2024-07-15 23:51:28.717702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.826 [2024-07-15 23:51:28.720572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.826 [2024-07-15 23:51:28.730084] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.826 [2024-07-15 23:51:28.730478] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.826 [2024-07-15 23:51:28.730518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:53.826 [2024-07-15 23:51:28.730533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:53.827 [2024-07-15 23:51:28.730774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:53.827 [2024-07-15 23:51:28.730996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.827 [2024-07-15 23:51:28.731016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.827 [2024-07-15 23:51:28.731029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.827 [2024-07-15 23:51:28.733904] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:53.827 [2024-07-15 23:51:28.743291] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.827 [2024-07-15 23:51:28.743667] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-15 23:51:28.743710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:53.827 [2024-07-15 23:51:28.743726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:53.827 [2024-07-15 23:51:28.743984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:53.827 [2024-07-15 23:51:28.744199] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.827 [2024-07-15 23:51:28.744218] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.827 [2024-07-15 23:51:28.744231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.827 [2024-07-15 23:51:28.747125] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.827 [2024-07-15 23:51:28.756463] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.827 [2024-07-15 23:51:28.756971] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-15 23:51:28.757021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:53.827 [2024-07-15 23:51:28.757037] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:53.827 [2024-07-15 23:51:28.757319] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:53.827 [2024-07-15 23:51:28.757512] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.827 [2024-07-15 23:51:28.757531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.827 [2024-07-15 23:51:28.757543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.827 [2024-07-15 23:51:28.760475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:53.827 [2024-07-15 23:51:28.769780] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.827 [2024-07-15 23:51:28.770139] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-15 23:51:28.770168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:53.827 [2024-07-15 23:51:28.770184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:53.827 [2024-07-15 23:51:28.770397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:53.827 [2024-07-15 23:51:28.770638] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.827 [2024-07-15 23:51:28.770660] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.827 [2024-07-15 23:51:28.770673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.827 [2024-07-15 23:51:28.774214] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.827 [2024-07-15 23:51:28.783199] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.827 [2024-07-15 23:51:28.783627] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-15 23:51:28.783655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:53.827 [2024-07-15 23:51:28.783671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:53.827 [2024-07-15 23:51:28.783939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:53.827 [2024-07-15 23:51:28.784180] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.827 [2024-07-15 23:51:28.784202] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.827 [2024-07-15 23:51:28.784216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.827 [2024-07-15 23:51:28.787379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:53.827 [2024-07-15 23:51:28.796788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.827 [2024-07-15 23:51:28.797215] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-15 23:51:28.797244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:53.827 [2024-07-15 23:51:28.797260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:53.827 [2024-07-15 23:51:28.797499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:53.827 [2024-07-15 23:51:28.797712] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.827 [2024-07-15 23:51:28.797741] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.827 [2024-07-15 23:51:28.797753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.827 [2024-07-15 23:51:28.800731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.827 [2024-07-15 23:51:28.809985] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.827 [2024-07-15 23:51:28.810444] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-15 23:51:28.810486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:53.827 [2024-07-15 23:51:28.810503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:53.827 [2024-07-15 23:51:28.810742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:53.827 [2024-07-15 23:51:28.810973] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.827 [2024-07-15 23:51:28.810994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.827 [2024-07-15 23:51:28.811027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.827 [2024-07-15 23:51:28.813890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:53.827 [2024-07-15 23:51:28.823174] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.827 [2024-07-15 23:51:28.823694] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-15 23:51:28.823754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:53.827 [2024-07-15 23:51:28.823769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:53.827 [2024-07-15 23:51:28.824027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:53.827 [2024-07-15 23:51:28.824226] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.827 [2024-07-15 23:51:28.824246] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.827 [2024-07-15 23:51:28.824273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.827 [2024-07-15 23:51:28.827182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.827 [2024-07-15 23:51:28.836285] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.827 [2024-07-15 23:51:28.836723] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-15 23:51:28.836765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:53.827 [2024-07-15 23:51:28.836781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:53.827 [2024-07-15 23:51:28.837042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:53.827 [2024-07-15 23:51:28.837242] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.827 [2024-07-15 23:51:28.837261] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.827 [2024-07-15 23:51:28.837287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.827 [2024-07-15 23:51:28.840228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:53.827 [2024-07-15 23:51:28.849474] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.827 [2024-07-15 23:51:28.849896] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.827 [2024-07-15 23:51:28.849938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:53.827 [2024-07-15 23:51:28.849962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:53.827 [2024-07-15 23:51:28.850233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:53.827 [2024-07-15 23:51:28.850442] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.828 [2024-07-15 23:51:28.850461] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.828 [2024-07-15 23:51:28.850473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.828 [2024-07-15 23:51:28.853411] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.828 [2024-07-15 23:51:28.862662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.828 [2024-07-15 23:51:28.863058] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-15 23:51:28.863087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:53.828 [2024-07-15 23:51:28.863103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:53.828 [2024-07-15 23:51:28.863331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:53.828 [2024-07-15 23:51:28.863540] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.828 [2024-07-15 23:51:28.863559] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.828 [2024-07-15 23:51:28.863571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.828 [2024-07-15 23:51:28.866484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:53.828 [2024-07-15 23:51:28.875809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.828 [2024-07-15 23:51:28.876281] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-15 23:51:28.876323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:53.828 [2024-07-15 23:51:28.876338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:53.828 [2024-07-15 23:51:28.876592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:53.828 [2024-07-15 23:51:28.876784] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.828 [2024-07-15 23:51:28.876804] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.828 [2024-07-15 23:51:28.876816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.828 [2024-07-15 23:51:28.879717] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.828 [2024-07-15 23:51:28.888987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.828 [2024-07-15 23:51:28.889379] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-15 23:51:28.889433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:53.828 [2024-07-15 23:51:28.889453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:53.828 [2024-07-15 23:51:28.889703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:53.828 [2024-07-15 23:51:28.889896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.828 [2024-07-15 23:51:28.889915] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.828 [2024-07-15 23:51:28.889927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.828 [2024-07-15 23:51:28.892862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:53.828 [2024-07-15 23:51:28.902197] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.828 [2024-07-15 23:51:28.902592] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-15 23:51:28.902635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:53.828 [2024-07-15 23:51:28.902650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:53.828 [2024-07-15 23:51:28.902917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:53.828 [2024-07-15 23:51:28.903141] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.828 [2024-07-15 23:51:28.903161] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.828 [2024-07-15 23:51:28.903174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.828 [2024-07-15 23:51:28.906104] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.828 [2024-07-15 23:51:28.915414] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.828 [2024-07-15 23:51:28.915800] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-15 23:51:28.915841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:53.828 [2024-07-15 23:51:28.915857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:53.828 [2024-07-15 23:51:28.916122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:53.828 [2024-07-15 23:51:28.916354] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.828 [2024-07-15 23:51:28.916373] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.828 [2024-07-15 23:51:28.916385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.828 [2024-07-15 23:51:28.919324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:53.828 [2024-07-15 23:51:28.928556] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.828 [2024-07-15 23:51:28.928927] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-15 23:51:28.928976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:53.828 [2024-07-15 23:51:28.928993] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:53.828 [2024-07-15 23:51:28.929248] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:53.828 [2024-07-15 23:51:28.929458] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.828 [2024-07-15 23:51:28.929481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.828 [2024-07-15 23:51:28.929494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.828 [2024-07-15 23:51:28.932466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.828 [2024-07-15 23:51:28.941918] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.828 [2024-07-15 23:51:28.942317] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.828 [2024-07-15 23:51:28.942346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:53.828 [2024-07-15 23:51:28.942361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:53.828 [2024-07-15 23:51:28.942588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:53.828 [2024-07-15 23:51:28.942802] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.828 [2024-07-15 23:51:28.942821] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.828 [2024-07-15 23:51:28.942833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.828 [2024-07-15 23:51:28.946134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:54.088 [2024-07-15 23:51:28.955602] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:54.088 [2024-07-15 23:51:28.955980] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.088 [2024-07-15 23:51:28.956013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:54.088 [2024-07-15 23:51:28.956030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:54.088 [2024-07-15 23:51:28.956260] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:54.088 [2024-07-15 23:51:28.956476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:54.088 [2024-07-15 23:51:28.956495] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:54.088 [2024-07-15 23:51:28.956508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:54.088 [2024-07-15 23:51:28.959647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:54.088 [2024-07-15 23:51:28.968802] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:54.088 [2024-07-15 23:51:28.969206] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.088 [2024-07-15 23:51:28.969236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:54.088 [2024-07-15 23:51:28.969253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:54.088 [2024-07-15 23:51:28.969480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:54.088 [2024-07-15 23:51:28.969694] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:54.088 [2024-07-15 23:51:28.969713] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:54.088 [2024-07-15 23:51:28.969726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:54.088 [2024-07-15 23:51:28.972773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
[... the same reset/reconnect failure cycle (resetting controller -> connect() failed, errno = 111 -> sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 -> controller reinitialization failed -> Resetting controller failed.) repeats roughly every 13 ms, differing only in timestamps, from 2024-07-15 23:51:28.982 through 23:51:29.636 ...]
00:24:54.612 [2024-07-15 23:51:29.646056] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:54.612 [2024-07-15 23:51:29.646470] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.612 [2024-07-15 23:51:29.646512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:54.612 [2024-07-15 23:51:29.646529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:54.612 [2024-07-15 23:51:29.646776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:54.612 [2024-07-15 23:51:29.647001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:54.612 [2024-07-15 23:51:29.647023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:54.612 [2024-07-15 23:51:29.647036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:54.612 [2024-07-15 23:51:29.650022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:54.612 [2024-07-15 23:51:29.659333] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:54.612 [2024-07-15 23:51:29.659712] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.612 [2024-07-15 23:51:29.659740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:54.612 [2024-07-15 23:51:29.659756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:54.612 [2024-07-15 23:51:29.660009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:54.612 [2024-07-15 23:51:29.660237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:54.613 [2024-07-15 23:51:29.660272] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:54.613 [2024-07-15 23:51:29.660285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:54.613 [2024-07-15 23:51:29.663284] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:54.613 [2024-07-15 23:51:29.672571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:54.613 [2024-07-15 23:51:29.672980] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.613 [2024-07-15 23:51:29.673010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:54.613 [2024-07-15 23:51:29.673025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:54.613 [2024-07-15 23:51:29.673257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:54.613 [2024-07-15 23:51:29.673472] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:54.613 [2024-07-15 23:51:29.673492] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:54.613 [2024-07-15 23:51:29.673504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:54.613 [2024-07-15 23:51:29.676497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:54.613 [2024-07-15 23:51:29.685754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:54.613 [2024-07-15 23:51:29.686118] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.613 [2024-07-15 23:51:29.686161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:54.613 [2024-07-15 23:51:29.686176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:54.613 [2024-07-15 23:51:29.686431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:54.613 [2024-07-15 23:51:29.686630] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:54.613 [2024-07-15 23:51:29.686650] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:54.613 [2024-07-15 23:51:29.686662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:54.613 [2024-07-15 23:51:29.689642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:54.613 [2024-07-15 23:51:29.698937] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:54.613 [2024-07-15 23:51:29.699362] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.613 [2024-07-15 23:51:29.699390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:54.613 [2024-07-15 23:51:29.699420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:54.613 [2024-07-15 23:51:29.699689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:54.613 [2024-07-15 23:51:29.699889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:54.613 [2024-07-15 23:51:29.699908] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:54.613 [2024-07-15 23:51:29.699920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:54.613 [2024-07-15 23:51:29.702913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:54.613 [2024-07-15 23:51:29.712220] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:54.613 [2024-07-15 23:51:29.712655] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.613 [2024-07-15 23:51:29.712683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:54.613 [2024-07-15 23:51:29.712699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:54.613 [2024-07-15 23:51:29.712946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:54.613 [2024-07-15 23:51:29.713183] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:54.613 [2024-07-15 23:51:29.713203] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:54.613 [2024-07-15 23:51:29.713216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:54.613 [2024-07-15 23:51:29.716217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:54.613 [2024-07-15 23:51:29.725518] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:54.613 [2024-07-15 23:51:29.725900] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.613 [2024-07-15 23:51:29.725929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:54.613 [2024-07-15 23:51:29.725944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:54.613 [2024-07-15 23:51:29.726182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:54.613 [2024-07-15 23:51:29.726402] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:54.613 [2024-07-15 23:51:29.726422] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:54.613 [2024-07-15 23:51:29.726434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:54.613 [2024-07-15 23:51:29.729433] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:54.873 [2024-07-15 23:51:29.738811] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:54.873 [2024-07-15 23:51:29.739234] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.873 [2024-07-15 23:51:29.739269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:54.873 [2024-07-15 23:51:29.739287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:54.873 [2024-07-15 23:51:29.739527] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:54.873 [2024-07-15 23:51:29.739749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:54.873 [2024-07-15 23:51:29.739772] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:54.873 [2024-07-15 23:51:29.739796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:54.873 [2024-07-15 23:51:29.742953] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:54.873 [2024-07-15 23:51:29.752177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:54.873 [2024-07-15 23:51:29.752578] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.873 [2024-07-15 23:51:29.752607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:54.873 [2024-07-15 23:51:29.752624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:54.873 [2024-07-15 23:51:29.752866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:54.873 [2024-07-15 23:51:29.753094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:54.873 [2024-07-15 23:51:29.753116] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:54.873 [2024-07-15 23:51:29.753128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:54.873 [2024-07-15 23:51:29.756117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:54.873 [2024-07-15 23:51:29.765410] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:54.873 [2024-07-15 23:51:29.765791] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.873 [2024-07-15 23:51:29.765818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:54.873 [2024-07-15 23:51:29.765834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:54.873 [2024-07-15 23:51:29.766086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:54.873 [2024-07-15 23:51:29.766326] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:54.873 [2024-07-15 23:51:29.766346] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:54.873 [2024-07-15 23:51:29.766359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:54.873 [2024-07-15 23:51:29.769349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:54.873 [2024-07-15 23:51:29.778625] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:54.873 [2024-07-15 23:51:29.778988] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.873 [2024-07-15 23:51:29.779017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:54.873 [2024-07-15 23:51:29.779034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:54.873 [2024-07-15 23:51:29.779263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:54.873 [2024-07-15 23:51:29.779501] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:54.873 [2024-07-15 23:51:29.779523] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:54.873 [2024-07-15 23:51:29.779536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:54.873 [2024-07-15 23:51:29.783123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:54.873 [2024-07-15 23:51:29.791892] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:54.873 [2024-07-15 23:51:29.792245] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.873 [2024-07-15 23:51:29.792275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:54.873 [2024-07-15 23:51:29.792291] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:54.873 [2024-07-15 23:51:29.792519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:54.873 [2024-07-15 23:51:29.792734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:54.873 [2024-07-15 23:51:29.792753] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:54.873 [2024-07-15 23:51:29.792765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:54.873 [2024-07-15 23:51:29.795821] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:54.874 [2024-07-15 23:51:29.805236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:54.874 [2024-07-15 23:51:29.805605] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.874 [2024-07-15 23:51:29.805656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:54.874 [2024-07-15 23:51:29.805672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:54.874 [2024-07-15 23:51:29.805911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:54.874 [2024-07-15 23:51:29.806159] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:54.874 [2024-07-15 23:51:29.806181] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:54.874 [2024-07-15 23:51:29.806194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:54.874 [2024-07-15 23:51:29.809194] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:54.874 [2024-07-15 23:51:29.818485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:54.874 [2024-07-15 23:51:29.818860] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.874 [2024-07-15 23:51:29.818887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:54.874 [2024-07-15 23:51:29.818903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:54.874 [2024-07-15 23:51:29.819155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:54.874 [2024-07-15 23:51:29.819402] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:54.874 [2024-07-15 23:51:29.819421] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:54.874 [2024-07-15 23:51:29.819433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:54.874 [2024-07-15 23:51:29.822392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:54.874 [2024-07-15 23:51:29.831623] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:54.874 [2024-07-15 23:51:29.831997] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.874 [2024-07-15 23:51:29.832025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:54.874 [2024-07-15 23:51:29.832041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:54.874 [2024-07-15 23:51:29.832284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:54.874 [2024-07-15 23:51:29.832492] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:54.874 [2024-07-15 23:51:29.832511] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:54.874 [2024-07-15 23:51:29.832523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:54.874 [2024-07-15 23:51:29.835416] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:54.874 [2024-07-15 23:51:29.844821] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:54.874 [2024-07-15 23:51:29.845194] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.874 [2024-07-15 23:51:29.845239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:54.874 [2024-07-15 23:51:29.845254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:54.874 [2024-07-15 23:51:29.845488] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:54.874 [2024-07-15 23:51:29.845688] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:54.874 [2024-07-15 23:51:29.845707] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:54.874 [2024-07-15 23:51:29.845719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:54.874 [2024-07-15 23:51:29.848711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:54.874 [2024-07-15 23:51:29.858164] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:54.874 [2024-07-15 23:51:29.858608] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.874 [2024-07-15 23:51:29.858650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:54.874 [2024-07-15 23:51:29.858666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:54.874 [2024-07-15 23:51:29.858948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:54.874 [2024-07-15 23:51:29.859180] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:54.874 [2024-07-15 23:51:29.859201] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:54.874 [2024-07-15 23:51:29.859214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:54.874 [2024-07-15 23:51:29.862112] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:54.874 [2024-07-15 23:51:29.871389] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:54.874 [2024-07-15 23:51:29.871762] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.874 [2024-07-15 23:51:29.871802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:54.874 [2024-07-15 23:51:29.871824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:54.874 [2024-07-15 23:51:29.872090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:54.874 [2024-07-15 23:51:29.872336] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:54.874 [2024-07-15 23:51:29.872355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:54.874 [2024-07-15 23:51:29.872369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:54.874 [2024-07-15 23:51:29.875248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:54.874 [2024-07-15 23:51:29.884521] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:54.874 [2024-07-15 23:51:29.884961] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.874 [2024-07-15 23:51:29.885005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:54.874 [2024-07-15 23:51:29.885021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:54.874 [2024-07-15 23:51:29.885275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:54.874 [2024-07-15 23:51:29.885468] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:54.874 [2024-07-15 23:51:29.885487] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:54.874 [2024-07-15 23:51:29.885499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:54.874 [2024-07-15 23:51:29.888434] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:54.874 [2024-07-15 23:51:29.897780] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:54.874 [2024-07-15 23:51:29.898180] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.874 [2024-07-15 23:51:29.898208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:54.874 [2024-07-15 23:51:29.898224] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:54.874 [2024-07-15 23:51:29.898468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:54.874 [2024-07-15 23:51:29.898660] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:54.874 [2024-07-15 23:51:29.898679] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:54.874 [2024-07-15 23:51:29.898691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:54.874 [2024-07-15 23:51:29.901625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:54.875 [2024-07-15 23:51:29.911063] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:54.875 [2024-07-15 23:51:29.911470] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.875 [2024-07-15 23:51:29.911512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:54.875 [2024-07-15 23:51:29.911527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:54.875 [2024-07-15 23:51:29.911795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:54.875 [2024-07-15 23:51:29.912038] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:54.875 [2024-07-15 23:51:29.912064] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:54.875 [2024-07-15 23:51:29.912078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:54.875 [2024-07-15 23:51:29.915106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:54.875 [2024-07-15 23:51:29.924327] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:54.875 [2024-07-15 23:51:29.924750] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.875 [2024-07-15 23:51:29.924779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:54.875 [2024-07-15 23:51:29.924794] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:54.875 [2024-07-15 23:51:29.925050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:54.875 [2024-07-15 23:51:29.925293] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:54.875 [2024-07-15 23:51:29.925314] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:54.875 [2024-07-15 23:51:29.925327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:54.875 [2024-07-15 23:51:29.928355] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:54.875 [2024-07-15 23:51:29.937531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:54.875 [2024-07-15 23:51:29.938038] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.875 [2024-07-15 23:51:29.938082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:54.875 [2024-07-15 23:51:29.938098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:54.875 [2024-07-15 23:51:29.938348] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:54.875 [2024-07-15 23:51:29.938541] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:54.875 [2024-07-15 23:51:29.938559] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:54.875 [2024-07-15 23:51:29.938571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:54.875 [2024-07-15 23:51:29.941560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:54.875 [2024-07-15 23:51:29.950779] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:54.875 [2024-07-15 23:51:29.951170] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.875 [2024-07-15 23:51:29.951226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:54.875 [2024-07-15 23:51:29.951242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:54.875 [2024-07-15 23:51:29.951485] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:54.875 [2024-07-15 23:51:29.951677] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:54.875 [2024-07-15 23:51:29.951695] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:54.875 [2024-07-15 23:51:29.951707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:54.875 [2024-07-15 23:51:29.954655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:54.875 [2024-07-15 23:51:29.963914] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:54.875 [2024-07-15 23:51:29.964368] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.875 [2024-07-15 23:51:29.964421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:54.875 [2024-07-15 23:51:29.964436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:54.875 [2024-07-15 23:51:29.964664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:54.875 [2024-07-15 23:51:29.964857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:54.875 [2024-07-15 23:51:29.964875] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:54.875 [2024-07-15 23:51:29.964887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:54.875 [2024-07-15 23:51:29.967813] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:54.875 [2024-07-15 23:51:29.977097] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:54.875 [2024-07-15 23:51:29.977615] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.875 [2024-07-15 23:51:29.977670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:54.875 [2024-07-15 23:51:29.977686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:54.875 [2024-07-15 23:51:29.977929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:54.875 [2024-07-15 23:51:29.978173] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:54.875 [2024-07-15 23:51:29.978193] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:54.875 [2024-07-15 23:51:29.978206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:54.875 [2024-07-15 23:51:29.981127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:54.875 [2024-07-15 23:51:29.990258] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:54.875 [2024-07-15 23:51:29.990640] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.875 [2024-07-15 23:51:29.990730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:54.875 [2024-07-15 23:51:29.990745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:54.875 [2024-07-15 23:51:29.991005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:54.875 [2024-07-15 23:51:29.991210] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:54.875 [2024-07-15 23:51:29.991245] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:54.875 [2024-07-15 23:51:29.991257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:54.875 [2024-07-15 23:51:29.994645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:55.135 [2024-07-15 23:51:30.004080] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.135 [2024-07-15 23:51:30.004570] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.135 [2024-07-15 23:51:30.004614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.135 [2024-07-15 23:51:30.004632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.135 [2024-07-15 23:51:30.004871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.135 [2024-07-15 23:51:30.005112] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.135 [2024-07-15 23:51:30.005135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.135 [2024-07-15 23:51:30.005148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.135 [2024-07-15 23:51:30.009479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:55.135 [2024-07-15 23:51:30.017502] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.135 [2024-07-15 23:51:30.017867] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.135 [2024-07-15 23:51:30.017897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.135 [2024-07-15 23:51:30.017913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.135 [2024-07-15 23:51:30.018154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.135 [2024-07-15 23:51:30.018377] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.135 [2024-07-15 23:51:30.018397] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.135 [2024-07-15 23:51:30.018409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.135 [2024-07-15 23:51:30.021574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:55.135 [2024-07-15 23:51:30.031024] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.135 [2024-07-15 23:51:30.031422] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.135 [2024-07-15 23:51:30.031471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.135 [2024-07-15 23:51:30.031488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.135 [2024-07-15 23:51:30.031717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.135 [2024-07-15 23:51:30.031929] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.135 [2024-07-15 23:51:30.031950] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.135 [2024-07-15 23:51:30.031976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.135 [2024-07-15 23:51:30.035566] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:55.135 [2024-07-15 23:51:30.044440] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.135 [2024-07-15 23:51:30.044898] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.135 [2024-07-15 23:51:30.044949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.135 [2024-07-15 23:51:30.044976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.135 [2024-07-15 23:51:30.045232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.135 [2024-07-15 23:51:30.045463] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.135 [2024-07-15 23:51:30.045483] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.135 [2024-07-15 23:51:30.045501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.135 [2024-07-15 23:51:30.048541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:55.135 [2024-07-15 23:51:30.057629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.135 [2024-07-15 23:51:30.058032] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.135 [2024-07-15 23:51:30.058061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.135 [2024-07-15 23:51:30.058078] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.135 [2024-07-15 23:51:30.058332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.135 [2024-07-15 23:51:30.058526] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.135 [2024-07-15 23:51:30.058545] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.135 [2024-07-15 23:51:30.058557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.135 [2024-07-15 23:51:30.061500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:55.135 [2024-07-15 23:51:30.071206] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.135 [2024-07-15 23:51:30.071643] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.135 [2024-07-15 23:51:30.071694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.135 [2024-07-15 23:51:30.071711] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.135 [2024-07-15 23:51:30.072007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.135 [2024-07-15 23:51:30.072227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.135 [2024-07-15 23:51:30.072248] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.135 [2024-07-15 23:51:30.072277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.136 [2024-07-15 23:51:30.075376] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:55.136 [2024-07-15 23:51:30.084603] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.136 [2024-07-15 23:51:30.084999] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.136 [2024-07-15 23:51:30.085037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.136 [2024-07-15 23:51:30.085053] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.136 [2024-07-15 23:51:30.085268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.136 [2024-07-15 23:51:30.085499] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.136 [2024-07-15 23:51:30.085517] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.136 [2024-07-15 23:51:30.085529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.136 [2024-07-15 23:51:30.088584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:55.136 [2024-07-15 23:51:30.097814] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.136 [2024-07-15 23:51:30.098219] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.136 [2024-07-15 23:51:30.098270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.136 [2024-07-15 23:51:30.098286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.136 [2024-07-15 23:51:30.098536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.136 [2024-07-15 23:51:30.098728] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.136 [2024-07-15 23:51:30.098747] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.136 [2024-07-15 23:51:30.098759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.136 [2024-07-15 23:51:30.101773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:55.136 [2024-07-15 23:51:30.111091] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.136 [2024-07-15 23:51:30.111559] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.136 [2024-07-15 23:51:30.111614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.136 [2024-07-15 23:51:30.111629] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.136 [2024-07-15 23:51:30.111873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.136 [2024-07-15 23:51:30.112094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.136 [2024-07-15 23:51:30.112130] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.136 [2024-07-15 23:51:30.112144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.136 [2024-07-15 23:51:30.115037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:55.136 [2024-07-15 23:51:30.124148] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.136 [2024-07-15 23:51:30.124522] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.136 [2024-07-15 23:51:30.124550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.136 [2024-07-15 23:51:30.124565] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.136 [2024-07-15 23:51:30.124799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.136 [2024-07-15 23:51:30.125050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.136 [2024-07-15 23:51:30.125071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.136 [2024-07-15 23:51:30.125084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.136 [2024-07-15 23:51:30.127987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:55.136 [2024-07-15 23:51:30.137295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.136 [2024-07-15 23:51:30.137671] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.136 [2024-07-15 23:51:30.137698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.136 [2024-07-15 23:51:30.137713] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.136 [2024-07-15 23:51:30.137947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.136 [2024-07-15 23:51:30.138182] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.136 [2024-07-15 23:51:30.138203] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.136 [2024-07-15 23:51:30.138215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.136 [2024-07-15 23:51:30.141029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:55.136 [2024-07-15 23:51:30.150362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.136 [2024-07-15 23:51:30.150705] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.136 [2024-07-15 23:51:30.150732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.136 [2024-07-15 23:51:30.150747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.136 [2024-07-15 23:51:30.150972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.136 [2024-07-15 23:51:30.151191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.136 [2024-07-15 23:51:30.151212] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.136 [2024-07-15 23:51:30.151225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.136 [2024-07-15 23:51:30.154139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:55.136 [2024-07-15 23:51:30.163515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.136 [2024-07-15 23:51:30.163855] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.136 [2024-07-15 23:51:30.163882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.136 [2024-07-15 23:51:30.163896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.136 [2024-07-15 23:51:30.164162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.136 [2024-07-15 23:51:30.164410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.136 [2024-07-15 23:51:30.164429] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.136 [2024-07-15 23:51:30.164441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.136 [2024-07-15 23:51:30.167336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:55.136 [2024-07-15 23:51:30.176503] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.136 [2024-07-15 23:51:30.176877] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.136 [2024-07-15 23:51:30.176904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.136 [2024-07-15 23:51:30.176920] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.136 [2024-07-15 23:51:30.177185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.136 [2024-07-15 23:51:30.177413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.136 [2024-07-15 23:51:30.177432] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.136 [2024-07-15 23:51:30.177443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.136 [2024-07-15 23:51:30.180346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:55.136 [2024-07-15 23:51:30.189665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.136 [2024-07-15 23:51:30.190010] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.136 [2024-07-15 23:51:30.190037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.136 [2024-07-15 23:51:30.190052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.136 [2024-07-15 23:51:30.190267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.136 [2024-07-15 23:51:30.190476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.136 [2024-07-15 23:51:30.190494] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.136 [2024-07-15 23:51:30.190506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.136 [2024-07-15 23:51:30.193327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:55.136 [2024-07-15 23:51:30.202739] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.136 [2024-07-15 23:51:30.203120] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.136 [2024-07-15 23:51:30.203147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.136 [2024-07-15 23:51:30.203162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.136 [2024-07-15 23:51:30.203377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.136 [2024-07-15 23:51:30.203584] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.136 [2024-07-15 23:51:30.203603] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.136 [2024-07-15 23:51:30.203614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.136 [2024-07-15 23:51:30.206436] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:55.136 [2024-07-15 23:51:30.215786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.136 [2024-07-15 23:51:30.216171] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.136 [2024-07-15 23:51:30.216199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.136 [2024-07-15 23:51:30.216214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.136 [2024-07-15 23:51:30.216449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.136 [2024-07-15 23:51:30.216658] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.136 [2024-07-15 23:51:30.216677] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.137 [2024-07-15 23:51:30.216689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.137 [2024-07-15 23:51:30.219585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:55.137 [2024-07-15 23:51:30.228857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.137 [2024-07-15 23:51:30.229253] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.137 [2024-07-15 23:51:30.229281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.137 [2024-07-15 23:51:30.229316] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.137 [2024-07-15 23:51:30.229545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.137 [2024-07-15 23:51:30.229738] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.137 [2024-07-15 23:51:30.229756] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.137 [2024-07-15 23:51:30.229768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.137 [2024-07-15 23:51:30.232666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:55.137 [2024-07-15 23:51:30.241911] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.137 [2024-07-15 23:51:30.242290] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.137 [2024-07-15 23:51:30.242318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.137 [2024-07-15 23:51:30.242333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.137 [2024-07-15 23:51:30.242568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.137 [2024-07-15 23:51:30.242777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.137 [2024-07-15 23:51:30.242796] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.137 [2024-07-15 23:51:30.242808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.137 [2024-07-15 23:51:30.245733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:55.137 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3884383 Killed "${NVMF_APP[@]}" "$@" 00:24:55.137 23:51:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:24:55.137 23:51:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:55.137 23:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:55.137 23:51:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:55.137 23:51:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:55.137 [2024-07-15 23:51:30.255429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.137 [2024-07-15 23:51:30.255807] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.137 [2024-07-15 23:51:30.255835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.137 [2024-07-15 23:51:30.255851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.137 [2024-07-15 23:51:30.256085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.137 23:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3885340 00:24:55.137 [2024-07-15 23:51:30.256321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.137 [2024-07-15 23:51:30.256344] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.137 [2024-07-15 23:51:30.256357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.137 23:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:55.137 23:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3885340 00:24:55.137 23:51:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3885340 ']' 00:24:55.137 23:51:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:55.137 23:51:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:55.137 23:51:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:55.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:55.137 23:51:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:55.137 23:51:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:55.397 [2024-07-15 23:51:30.259737] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
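(Editor's aside: the block above explains the reconnect storm: the script killed the old target process at bdevperf.sh line 35 and `tgt_init` is restarting `nvmf_tgt -i 0 -e 0xFFFF -m 0xE`, so the host's reset attempts fail until the new target listens again. As a side note, the core mask `-m 0xE` is binary 1110, i.e. cores 1, 2 and 3 -- consistent with the "Total cores available: 3" notice and the three reactors that start on cores 1, 2 and 3 further down. A one-liner to check that arithmetic:)

```python
# Decode the core mask passed to nvmf_tgt: -m 0xE (binary 1110).
mask = 0xE
cores = [bit for bit in range(mask.bit_length()) if mask & (1 << bit)]
print(cores)  # [1, 2, 3] -- matches the reactors started on cores 1, 2, 3
```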
00:24:55.397 [2024-07-15 23:51:30.268790] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.397 [2024-07-15 23:51:30.269239] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.397 [2024-07-15 23:51:30.269283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.397 [2024-07-15 23:51:30.269299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.397 [2024-07-15 23:51:30.269533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.397 [2024-07-15 23:51:30.269726] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.397 [2024-07-15 23:51:30.269745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.397 [2024-07-15 23:51:30.269757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.397 [2024-07-15 23:51:30.272713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:55.397 [2024-07-15 23:51:30.282131] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.397 [2024-07-15 23:51:30.282584] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.397 [2024-07-15 23:51:30.282639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.397 [2024-07-15 23:51:30.282654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.397 [2024-07-15 23:51:30.282899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.397 [2024-07-15 23:51:30.283156] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.397 [2024-07-15 23:51:30.283178] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.397 [2024-07-15 23:51:30.283193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.397 [2024-07-15 23:51:30.286761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:55.397 [2024-07-15 23:51:30.295445] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.397 [2024-07-15 23:51:30.295822] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.397 [2024-07-15 23:51:30.295851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.397 [2024-07-15 23:51:30.295866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.397 [2024-07-15 23:51:30.296106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.397 [2024-07-15 23:51:30.296347] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.397 [2024-07-15 23:51:30.296370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.397 [2024-07-15 23:51:30.296382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.397 [2024-07-15 23:51:30.299308] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:55.397 [2024-07-15 23:51:30.305624] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:24:55.397 [2024-07-15 23:51:30.305702] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:55.397 [2024-07-15 23:51:30.308773] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.397 [2024-07-15 23:51:30.309178] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.397 [2024-07-15 23:51:30.309206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.397 [2024-07-15 23:51:30.309222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.397 [2024-07-15 23:51:30.309455] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.397 [2024-07-15 23:51:30.309649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.397 [2024-07-15 23:51:30.309668] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.397 [2024-07-15 23:51:30.309680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.397 [2024-07-15 23:51:30.312669] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:55.397 [2024-07-15 23:51:30.322225] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.397 [2024-07-15 23:51:30.322588] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.397 [2024-07-15 23:51:30.322615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.397 [2024-07-15 23:51:30.322630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.397 [2024-07-15 23:51:30.322846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.397 [2024-07-15 23:51:30.323081] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.397 [2024-07-15 23:51:30.323102] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.397 [2024-07-15 23:51:30.323115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.397 [2024-07-15 23:51:30.326017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:55.397 [2024-07-15 23:51:30.335340] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.397 [2024-07-15 23:51:30.335737] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.397 [2024-07-15 23:51:30.335764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.397 [2024-07-15 23:51:30.335780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.397 [2024-07-15 23:51:30.336039] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.397 [2024-07-15 23:51:30.336244] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.397 [2024-07-15 23:51:30.336279] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.397 [2024-07-15 23:51:30.336296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.397 [2024-07-15 23:51:30.339179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:55.397 EAL: No free 2048 kB hugepages reported on node 1 00:24:55.397 [2024-07-15 23:51:30.348728] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.397 [2024-07-15 23:51:30.349125] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.397 [2024-07-15 23:51:30.349154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.397 [2024-07-15 23:51:30.349171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.397 [2024-07-15 23:51:30.349399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.397 [2024-07-15 23:51:30.349613] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.397 [2024-07-15 23:51:30.349632] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.397 [2024-07-15 23:51:30.349644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.397 [2024-07-15 23:51:30.352617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:55.397 [2024-07-15 23:51:30.362046] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.397 [2024-07-15 23:51:30.362429] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.397 [2024-07-15 23:51:30.362457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.397 [2024-07-15 23:51:30.362472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.397 [2024-07-15 23:51:30.362694] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.397 [2024-07-15 23:51:30.362907] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.397 [2024-07-15 23:51:30.362927] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.397 [2024-07-15 23:51:30.362953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.397 [2024-07-15 23:51:30.366025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
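(Editor's aside: the `EAL: No free 2048 kB hugepages reported on node 1` notice in the block above is informational: DPDK found no free 2 MB hugepages on NUMA node 1, so hugepage memory comes from the other node(s). The probe below is a minimal sketch assuming the standard Linux sysfs layout, not part of the test suite.)

```python
# Illustrative sysfs probe for per-node free 2 MB hugepage counters,
# the same counters DPDK's EAL inspects at startup.
from pathlib import Path

def free_2mb_hugepages() -> dict:
    counts = {}
    for node in sorted(Path("/sys/devices/system/node").glob("node[0-9]*")):
        counter = node / "hugepages" / "hugepages-2048kB" / "free_hugepages"
        if counter.exists():
            counts[node.name] = int(counter.read_text())
    return counts

if __name__ == "__main__":
    # e.g. {'node0': 1024, 'node1': 0} on a box matching this log's notice
    print(free_2mb_hugepages())
```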
00:24:55.397 [2024-07-15 23:51:30.370889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:55.397 [2024-07-15 23:51:30.375414] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.397 [2024-07-15 23:51:30.375818] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.397 [2024-07-15 23:51:30.375847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.397 [2024-07-15 23:51:30.375864] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.397 [2024-07-15 23:51:30.376098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.397 [2024-07-15 23:51:30.376318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.397 [2024-07-15 23:51:30.376339] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.397 [2024-07-15 23:51:30.376352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.397 [2024-07-15 23:51:30.379380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:55.397 [2024-07-15 23:51:30.388690] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.397 [2024-07-15 23:51:30.389199] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.397 [2024-07-15 23:51:30.389236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.397 [2024-07-15 23:51:30.389255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.397 [2024-07-15 23:51:30.389503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.397 [2024-07-15 23:51:30.389704] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.397 [2024-07-15 23:51:30.389723] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.397 [2024-07-15 23:51:30.389738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.397 [2024-07-15 23:51:30.392813] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:55.397 [2024-07-15 23:51:30.401952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.397 [2024-07-15 23:51:30.402409] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.397 [2024-07-15 23:51:30.402437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.397 [2024-07-15 23:51:30.402453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.397 [2024-07-15 23:51:30.402674] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.397 [2024-07-15 23:51:30.402889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.397 [2024-07-15 23:51:30.402909] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.397 [2024-07-15 23:51:30.402921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.397 [2024-07-15 23:51:30.405945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:55.397 [2024-07-15 23:51:30.415314] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.397 [2024-07-15 23:51:30.415702] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.397 [2024-07-15 23:51:30.415731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.397 [2024-07-15 23:51:30.415747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.397 [2024-07-15 23:51:30.416001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.397 [2024-07-15 23:51:30.416228] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.397 [2024-07-15 23:51:30.416249] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.397 [2024-07-15 23:51:30.416262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.397 [2024-07-15 23:51:30.419277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:55.397 [2024-07-15 23:51:30.428655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.397 [2024-07-15 23:51:30.429168] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.397 [2024-07-15 23:51:30.429203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.397 [2024-07-15 23:51:30.429230] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.398 [2024-07-15 23:51:30.429489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.398 [2024-07-15 23:51:30.429691] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.398 [2024-07-15 23:51:30.429710] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.398 [2024-07-15 23:51:30.429724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.398 [2024-07-15 23:51:30.432749] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:55.398 [2024-07-15 23:51:30.442074] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.398 [2024-07-15 23:51:30.442538] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.398 [2024-07-15 23:51:30.442572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.398 [2024-07-15 23:51:30.442590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.398 [2024-07-15 23:51:30.442831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.398 [2024-07-15 23:51:30.443077] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.398 [2024-07-15 23:51:30.443098] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.398 [2024-07-15 23:51:30.443113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.398 [2024-07-15 23:51:30.446139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:55.398 [2024-07-15 23:51:30.455310] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.398 [2024-07-15 23:51:30.455696] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.398 [2024-07-15 23:51:30.455725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.398 [2024-07-15 23:51:30.455742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.398 [2024-07-15 23:51:30.455996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.398 [2024-07-15 23:51:30.456224] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.398 [2024-07-15 23:51:30.456245] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.398 [2024-07-15 23:51:30.456259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.398 [2024-07-15 23:51:30.459274] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:55.398 [2024-07-15 23:51:30.468572] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.398 [2024-07-15 23:51:30.468897] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.398 [2024-07-15 23:51:30.468924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.398 [2024-07-15 23:51:30.468940] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.398 [2024-07-15 23:51:30.469204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.398 [2024-07-15 23:51:30.469426] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.398 [2024-07-15 23:51:30.469458] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.398 [2024-07-15 23:51:30.469471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.398 [2024-07-15 23:51:30.472483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:55.398 [2024-07-15 23:51:30.477494] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:55.398 [2024-07-15 23:51:30.477540] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:55.398 [2024-07-15 23:51:30.477553] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:55.398 [2024-07-15 23:51:30.477563] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:55.398 [2024-07-15 23:51:30.477572] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:55.398 [2024-07-15 23:51:30.477646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:55.398 [2024-07-15 23:51:30.477704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:55.398 [2024-07-15 23:51:30.477707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:55.398 [2024-07-15 23:51:30.482082] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.398 [2024-07-15 23:51:30.482511] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.398 [2024-07-15 23:51:30.482543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.398 [2024-07-15 23:51:30.482560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.398 [2024-07-15 23:51:30.482792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.398 [2024-07-15 23:51:30.483036] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.398 [2024-07-15 23:51:30.483059] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.398 [2024-07-15 23:51:30.483074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.398 [2024-07-15 23:51:30.486289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:55.398 [2024-07-15 23:51:30.495532] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.398 [2024-07-15 23:51:30.496074] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.398 [2024-07-15 23:51:30.496113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.398 [2024-07-15 23:51:30.496131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.398 [2024-07-15 23:51:30.496369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.398 [2024-07-15 23:51:30.496585] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.398 [2024-07-15 23:51:30.496606] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.398 [2024-07-15 23:51:30.496622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.398 [2024-07-15 23:51:30.499774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:55.398 [2024-07-15 23:51:30.509177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.398 [2024-07-15 23:51:30.509746] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.398 [2024-07-15 23:51:30.509790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.398 [2024-07-15 23:51:30.509822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.398 [2024-07-15 23:51:30.510058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.398 [2024-07-15 23:51:30.510295] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.398 [2024-07-15 23:51:30.510316] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.398 [2024-07-15 23:51:30.510332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.398 [2024-07-15 23:51:30.513534] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:55.659 [2024-07-15 23:51:30.522771] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.659 [2024-07-15 23:51:30.523237] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.659 [2024-07-15 23:51:30.523277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.659 [2024-07-15 23:51:30.523297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.659 [2024-07-15 23:51:30.523534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.659 [2024-07-15 23:51:30.523750] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.659 [2024-07-15 23:51:30.523774] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.659 [2024-07-15 23:51:30.523806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.659 [2024-07-15 23:51:30.527313] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:55.659 [2024-07-15 23:51:30.536381] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.659 [2024-07-15 23:51:30.536850] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.659 [2024-07-15 23:51:30.536888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.659 [2024-07-15 23:51:30.536907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.659 [2024-07-15 23:51:30.537136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.659 [2024-07-15 23:51:30.537358] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.659 [2024-07-15 23:51:30.537380] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.659 [2024-07-15 23:51:30.537395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.659 [2024-07-15 23:51:30.540746] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:55.659 [2024-07-15 23:51:30.549996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.659 [2024-07-15 23:51:30.550529] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.659 [2024-07-15 23:51:30.550574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.659 [2024-07-15 23:51:30.550593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.659 [2024-07-15 23:51:30.550833] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.659 [2024-07-15 23:51:30.551078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.659 [2024-07-15 23:51:30.551120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.659 [2024-07-15 23:51:30.551137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.659 [2024-07-15 23:51:30.554409] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:55.659 [2024-07-15 23:51:30.563495] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.659 [2024-07-15 23:51:30.563941] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.659 [2024-07-15 23:51:30.563984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.659 [2024-07-15 23:51:30.564004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.659 [2024-07-15 23:51:30.564224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.659 [2024-07-15 23:51:30.564454] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.659 [2024-07-15 23:51:30.564476] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.659 [2024-07-15 23:51:30.564491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.659 [2024-07-15 23:51:30.567697] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:55.659 [2024-07-15 23:51:30.576810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.659 [2024-07-15 23:51:30.577198] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.659 [2024-07-15 23:51:30.577227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.659 [2024-07-15 23:51:30.577243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.659 [2024-07-15 23:51:30.577474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.659 [2024-07-15 23:51:30.577686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.659 [2024-07-15 23:51:30.577707] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.659 [2024-07-15 23:51:30.577720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.659 [2024-07-15 23:51:30.580883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:55.659 [2024-07-15 23:51:30.590276] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.659 [2024-07-15 23:51:30.590638] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.659 [2024-07-15 23:51:30.590666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.659 [2024-07-15 23:51:30.590683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.659 [2024-07-15 23:51:30.590898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.659 [2024-07-15 23:51:30.591156] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.659 [2024-07-15 23:51:30.591178] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.659 [2024-07-15 23:51:30.591192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.659 [2024-07-15 23:51:30.594399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:55.659 [2024-07-15 23:51:30.603738] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.659 [2024-07-15 23:51:30.604131] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.659 [2024-07-15 23:51:30.604161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.659 [2024-07-15 23:51:30.604177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.659 [2024-07-15 23:51:30.604406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.659 [2024-07-15 23:51:30.604617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.659 [2024-07-15 23:51:30.604638] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.659 [2024-07-15 23:51:30.604651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.659 [2024-07-15 23:51:30.607823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:55.659 [2024-07-15 23:51:30.617211] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.659 [2024-07-15 23:51:30.617571] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.659 [2024-07-15 23:51:30.617600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.659 [2024-07-15 23:51:30.617616] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.659 [2024-07-15 23:51:30.617829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.659 [2024-07-15 23:51:30.618088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.659 [2024-07-15 23:51:30.618110] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.659 [2024-07-15 23:51:30.618124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.659 [2024-07-15 23:51:30.621329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:55.659 [2024-07-15 23:51:30.630684] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.659 [2024-07-15 23:51:30.631057] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.659 [2024-07-15 23:51:30.631085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.659 [2024-07-15 23:51:30.631101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.659 [2024-07-15 23:51:30.631315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.659 [2024-07-15 23:51:30.631542] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.659 [2024-07-15 23:51:30.631562] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.659 [2024-07-15 23:51:30.631576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.659 [2024-07-15 23:51:30.634770] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:55.659 [2024-07-15 23:51:30.644123] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.659 [2024-07-15 23:51:30.644511] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.659 [2024-07-15 23:51:30.644539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.659 [2024-07-15 23:51:30.644555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.659 [2024-07-15 23:51:30.644789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.659 [2024-07-15 23:51:30.645029] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.659 [2024-07-15 23:51:30.645051] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.659 [2024-07-15 23:51:30.645064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.659 [2024-07-15 23:51:30.648229] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:55.659 [2024-07-15 23:51:30.657582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.659 [2024-07-15 23:51:30.657961] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.659 [2024-07-15 23:51:30.657990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:55.659 [2024-07-15 23:51:30.658006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:55.659 [2024-07-15 23:51:30.658221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:55.659 [2024-07-15 23:51:30.658447] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:55.660 [2024-07-15 23:51:30.658467] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:55.660 [2024-07-15 23:51:30.658480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.660 [2024-07-15 23:51:30.661717] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:55.660 [2024-07-15 23:51:30.671089] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:55.660 [2024-07-15 23:51:30.671490] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.660 [2024-07-15 23:51:30.671518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420
00:24:55.660 [2024-07-15 23:51:30.671534] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set
00:24:55.660 [2024-07-15 23:51:30.671747] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor
00:24:55.660 [2024-07-15 23:51:30.672001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:55.660 [2024-07-15 23:51:30.672026] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:55.660 [2024-07-15 23:51:30.672039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:55.660 [2024-07-15 23:51:30.675258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[... the identical nine-record cycle (resetting controller, connect() failed errno = 111, sock connection error, recv state, flush failure, Ctrlr is in error state, controller reinitialization failed, in failed state, Resetting controller failed) repeats 41 more times with only the timestamps advancing, roughly every 13 ms, starting at 23:51:30.684641 and ending at 23:51:31.229614. The NVMe/TCP target at 10.0.0.2:4420 is not yet listening, so every reconnect attempt is refused. ...]
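[Editor's note, not part of the captured output: errno = 111 is ECONNREFUSED, i.e. nothing is accepting connections on 10.0.0.2:4420 yet; the listener is only added further down, just before the final successful reset. A minimal sketch for confirming this from the test host, assuming stock iproute2 (ss) and netcat tooling rather than anything captured in this job:

    # Show any TCP listener on the NVMe/TCP port; empty output means nothing is accepting yet.
    ss -ltn 'sport = :4420'

    # Probe the target directly; "Connection refused" here corresponds to the errno = 111 records above.
    nc -zv -w1 10.0.0.2 4420
]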
00:24:56.183 [2024-07-15 23:51:31.239095] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:56.183 [2024-07-15 23:51:31.239429] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.183 [2024-07-15 23:51:31.239464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420
00:24:56.184 [2024-07-15 23:51:31.239480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set
00:24:56.184 [2024-07-15 23:51:31.239694] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor
00:24:56.184 [2024-07-15 23:51:31.239912] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:56.184 [2024-07-15 23:51:31.239933] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:56.184 [2024-07-15 23:51:31.239946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:56.184 [2024-07-15 23:51:31.243198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:56.184 23:51:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:56.184 23:51:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0
00:24:56.184 23:51:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:24:56.184 23:51:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable
00:24:56.184 23:51:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:56.184 [2024-07-15 23:51:31.252591] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:56.184 [2024-07-15 23:51:31.252921] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.184 [2024-07-15 23:51:31.252950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420
00:24:56.184 [2024-07-15 23:51:31.252976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set
00:24:56.184 [2024-07-15 23:51:31.253192] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor
00:24:56.184 [2024-07-15 23:51:31.253421] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:56.184 [2024-07-15 23:51:31.253442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:56.184 [2024-07-15 23:51:31.253454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:56.184 [2024-07-15 23:51:31.256677] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
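[Editor's note, not part of the captured output: the "(( i == 0 )) ... return 0 ... timing_exit start_nvmf_tgt" trace above is the tail of a readiness poll succeeding, i.e. the target application's RPC socket is now answering even though its NVMe/TCP listener has not been added yet. A hypothetical sketch of that polling pattern (not the actual autotest_common.sh source; the spdk_get_version RPC is used here only as a cheap liveness probe):

    # Poll until the target answers RPCs, or give up after ~5 s.
    i=50
    while (( i > 0 )); do
        scripts/rpc.py spdk_get_version > /dev/null 2>&1 && break
        i=$((i - 1))
        sleep 0.1
    done
    (( i == 0 )) && { echo "target never came up" >&2; exit 1; }
]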
00:24:56.184 [2024-07-15 23:51:31.266205] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:56.184 [2024-07-15 23:51:31.266561] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.184 [2024-07-15 23:51:31.266590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420
00:24:56.184 [2024-07-15 23:51:31.266606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set
00:24:56.184 [2024-07-15 23:51:31.266819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor
00:24:56.184 [2024-07-15 23:51:31.267076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:56.184 [2024-07-15 23:51:31.267098] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:56.184 [2024-07-15 23:51:31.267112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:56.184 23:51:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:56.184 23:51:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:24:56.184 23:51:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:56.184 23:51:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:56.184 [2024-07-15 23:51:31.270386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:56.184 [2024-07-15 23:51:31.270730] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:56.184 23:51:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:56.184 23:51:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:24:56.184 23:51:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:56.184 23:51:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:56.184 [2024-07-15 23:51:31.279801] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:56.184 [2024-07-15 23:51:31.280191] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.184 [2024-07-15 23:51:31.280219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420
00:24:56.184 [2024-07-15 23:51:31.280246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set
00:24:56.184 [2024-07-15 23:51:31.280474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor
00:24:56.184 [2024-07-15 23:51:31.280686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:56.184 [2024-07-15 23:51:31.280706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:56.184 [2024-07-15 23:51:31.280719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:56.184 [2024-07-15 23:51:31.283869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.184 [2024-07-15 23:51:31.293392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.184 [2024-07-15 23:51:31.293819] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.184 [2024-07-15 23:51:31.293852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:56.184 [2024-07-15 23:51:31.293871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:56.184 [2024-07-15 23:51:31.294099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:56.184 [2024-07-15 23:51:31.294337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.184 [2024-07-15 23:51:31.294358] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.184 [2024-07-15 23:51:31.294373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.184 [2024-07-15 23:51:31.297651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.443 Malloc0 00:24:56.443 23:51:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.443 23:51:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:56.443 23:51:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.443 23:51:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:56.443 [2024-07-15 23:51:31.307191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.443 [2024-07-15 23:51:31.307599] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.443 [2024-07-15 23:51:31.307635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:56.443 [2024-07-15 23:51:31.307654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:56.443 [2024-07-15 23:51:31.307874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:56.443 [2024-07-15 23:51:31.308112] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.443 [2024-07-15 23:51:31.308135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.443 [2024-07-15 23:51:31.308151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.443 [2024-07-15 23:51:31.311645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
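Each of the blocks above is one failed reset cycle: bdev_nvme disconnects the controller, connect() toward 10.0.0.2:4420 comes back with errno 111 (ECONNREFUSED) because nothing is listening there yet, the qpair flush fails with errno 9 (Bad file descriptor), and the attempt ends in "Resetting controller failed" before the next retry. The reachability condition the driver keeps probing can be checked from the shell with bash's /dev/tcp redirection; this loop is illustrative only, not part of the test scripts:

    # Retry until something accepts on 10.0.0.2:4420; connect() against a
    # closed port fails with ECONNREFUSED (errno 111), as in the log above.
    until (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; do
        echo "connect() refused, retrying..." >&2
        sleep 1
    done
    echo "listener is back on 10.0.0.2:4420"

The probe socket lives only inside the subshell, so it is closed again as soon as the port answers.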
00:24:56.443 23:51:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.443 23:51:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:56.443 23:51:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.443 23:51:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:56.443 [2024-07-15 23:51:31.320687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.443 [2024-07-15 23:51:31.321076] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.443 [2024-07-15 23:51:31.321106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e03ac0 with addr=10.0.0.2, port=4420 00:24:56.443 [2024-07-15 23:51:31.321123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ac0 is same with the state(5) to be set 00:24:56.443 [2024-07-15 23:51:31.321338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03ac0 (9): Bad file descriptor 00:24:56.443 [2024-07-15 23:51:31.321557] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.443 [2024-07-15 23:51:31.321578] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.443 [2024-07-15 23:51:31.321591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.443 23:51:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.443 23:51:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:56.443 23:51:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.443 23:51:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:56.443 [2024-07-15 23:51:31.324875] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.443 [2024-07-15 23:51:31.325616] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:56.443 23:51:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.443 23:51:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3884557 00:24:56.443 [2024-07-15 23:51:31.334288] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.443 [2024-07-15 23:51:31.364046] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
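Interleaved with those retries, host/bdevperf.sh (lines 17-21 of the script, per the trace tags) rebuilds the target side over RPC: a TCP transport with the -o and -u 8192 options, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1, the namespace attach, and a listener on 10.0.0.2:4420. rpc_cmd is the test framework's wrapper around scripts/rpc.py, so the same bring-up can be issued by hand against a live nvmf_tgt:

    # The bring-up traced above, issued directly via scripts/rpc.py.
    rpc=scripts/rpc.py   # path inside an SPDK checkout
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0            # 64 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420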
00:25:06.416
00:25:06.416                                                              Latency(us)
00:25:06.416 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s       TO/s    Average        min        max
00:25:06.416 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:06.416 Verification LBA range: start 0x0 length 0x4000
00:25:06.416 Nvme1n1                     :      15.01    6283.13      24.54   11875.07       0.00    7025.72     837.40   15340.28
00:25:06.416 ===================================================================================================================
00:25:06.416 Total                       :               6283.13      24.54   11875.07       0.00    7025.72     837.40   15340.28
00:25:06.416 23:51:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:25:06.416 23:51:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:06.416 23:51:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:06.416 23:51:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:06.416 23:51:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:06.416 23:51:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:25:06.416 23:51:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:25:06.416 23:51:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:25:06.416 23:51:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:25:06.416 23:51:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:25:06.416 23:51:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:25:06.416 23:51:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:25:06.416 23:51:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:25:06.416 rmmod nvme_tcp
00:25:06.416 rmmod nvme_fabrics
00:25:06.416 rmmod nvme_keyring
00:25:06.416 23:51:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:25:06.416 23:51:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:25:06.416 23:51:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
00:25:06.416 23:51:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 3885340 ']'
00:25:06.416 23:51:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3885340
00:25:06.416 23:51:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 3885340 ']'
00:25:06.416 23:51:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 3885340
00:25:06.416 23:51:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname
00:25:06.416 23:51:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:25:06.416 23:51:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3885340
00:25:06.416 23:51:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:25:06.416 23:51:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:25:06.416 23:51:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3885340'
00:25:06.416 killing process with pid 3885340
00:25:06.416 23:51:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 3885340
00:25:06.416 23:51:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 3885340
00:25:06.416 23:51:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:25:06.416 23:51:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
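The bdevperf table above is internally consistent: at a 4096-byte I/O size, the IOPS and MiB/s columns are the same number in different units. A one-line check with any POSIX awk:

    # 4 KiB I/O: IOPS * 4096 bytes / 2^20 -> MiB/s
    awk 'BEGIN { printf "%.2f MiB/s\n", 6283.13 * 4096 / 1048576 }'   # prints 24.54 MiB/s

The outsized Fail/s column is consistent with the controller resets earlier in the run: submissions issued while the target was unreachable completed in error rather than verifying data.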
00:25:06.416 23:51:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:06.416 23:51:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:06.416 23:51:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:06.416 23:51:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:06.416 23:51:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:06.416 23:51:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:07.354 23:51:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:07.354 00:25:07.354 real 0m22.584s 00:25:07.354 user 1m0.675s 00:25:07.354 sys 0m4.283s 00:25:07.354 23:51:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:07.354 23:51:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:07.354 ************************************ 00:25:07.354 END TEST nvmf_bdevperf 00:25:07.354 ************************************ 00:25:07.354 23:51:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:07.354 23:51:42 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:07.354 23:51:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:07.354 23:51:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:07.354 23:51:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:07.611 ************************************ 00:25:07.611 START TEST nvmf_target_disconnect 00:25:07.611 ************************************ 00:25:07.611 23:51:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:07.611 * Looking for test storage... 
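The END TEST / START TEST banners and the real/user/sys block above come from the run_test wrapper in autotest_common.sh, which frames every suite in this log: print a banner, run the named script under time, print the closing banner. Reduced to its essentials (a simplified sketch; the real helper also validates arguments and propagates exit codes):

    # Simplified model of the run_test wrapper seen throughout this log.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                 # produces the real/user/sys lines
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }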
00:25:07.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:07.611 23:51:42 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:07.611 23:51:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:25:07.611 23:51:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:07.611 23:51:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:07.611 23:51:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:07.611 23:51:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:07.611 23:51:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:07.611 23:51:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:07.611 23:51:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:07.611 23:51:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:07.611 23:51:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:07.611 23:51:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:07.611 23:51:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:07.611 23:51:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:07.611 23:51:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:07.611 23:51:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:07.611 23:51:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:07.611 23:51:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:07.611 23:51:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:07.611 23:51:42 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:07.611 23:51:42 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:07.611 23:51:42 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:07.611 23:51:42 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.612 23:51:42 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.612 23:51:42 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.612 23:51:42 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:25:07.612 23:51:42 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.612 23:51:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:25:07.612 23:51:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:07.612 23:51:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:07.612 23:51:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:07.612 23:51:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:07.612 23:51:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:07.612 23:51:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:07.612 23:51:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:07.612 23:51:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:07.612 23:51:42 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:07.612 23:51:42 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:25:07.612 23:51:42 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:25:07.612 23:51:42 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:25:07.612 23:51:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:07.612 23:51:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:07.612 23:51:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:25:07.612 23:51:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:07.612 23:51:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:07.612 23:51:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:07.612 23:51:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:07.612 23:51:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:07.612 23:51:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:07.612 23:51:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:07.612 23:51:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:25:07.612 23:51:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
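The e810, x722, and mlx arrays being assembled here (two more Mellanox IDs follow just below) are plain lists of PCI device IDs; the loop after them matches each discovered function against the lists and resolves its kernel interface names through sysfs, which is what the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) glob in the trace does. The same lookup works standalone:

    # Interface names behind one PCI function, as the pci_net_devs glob does.
    pci=0000:09:00.0               # first E810 port (0x8086:0x159b) in this run
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$dev" ] && echo "${dev##*/}"    # prints cvl_0_0 on this host
    done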
00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:09.508 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:09.508 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:09.508 23:51:44 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:09.508 Found net devices under 0000:09:00.0: cvl_0_0 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:09.508 Found net devices under 0000:09:00.1: cvl_0_1 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:09.508 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:09.509 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:09.509 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:09.509 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:09.509 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:09.509 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:09.766 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:09.766 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:25:09.766 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:09.766 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:09.766 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:09.766 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:09.766 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:09.766 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:09.766 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:25:09.766 00:25:09.766 --- 10.0.0.2 ping statistics --- 00:25:09.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:09.766 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:25:09.766 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:09.766 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:09.766 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:25:09.766 00:25:09.766 --- 10.0.0.1 ping statistics --- 00:25:09.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:09.766 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:25:09.766 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:09.766 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:25:09.766 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:09.766 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:09.766 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:09.766 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:09.766 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:09.766 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:09.766 23:51:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:09.766 23:51:44 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:25:09.766 23:51:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:09.766 23:51:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:09.766 23:51:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:09.766 ************************************ 00:25:09.766 START TEST nvmf_target_disconnect_tc1 00:25:09.766 ************************************ 00:25:09.766 23:51:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:25:09.766 23:51:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:09.766 23:51:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:25:09.766 
23:51:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:09.766 23:51:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:09.766 23:51:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:09.766 23:51:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:09.766 23:51:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:09.766 23:51:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:09.766 23:51:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:09.766 23:51:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:09.766 23:51:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:25:09.766 23:51:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:09.766 EAL: No free 2048 kB hugepages reported on node 1 00:25:09.766 [2024-07-15 23:51:44.860677] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.766 [2024-07-15 23:51:44.860741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec11a0 with addr=10.0.0.2, port=4420 00:25:09.766 [2024-07-15 23:51:44.860775] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:09.766 [2024-07-15 23:51:44.860793] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:09.766 [2024-07-15 23:51:44.860806] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:25:09.766 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:25:09.766 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:25:09.766 Initializing NVMe Controllers 00:25:09.766 23:51:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:25:09.766 23:51:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:09.766 23:51:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:09.766 23:51:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:09.766 00:25:09.766 real 0m0.087s 00:25:09.766 user 0m0.036s 00:25:09.766 sys 0m0.050s 
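The tc1 case above passes precisely because the probe fails: with no target process listening, spdk_nvme_probe() hits ECONNREFUSED, the reconnect example exits nonzero (the es=1 in the trace), and the NOT wrapper, whose valid_exec_arg bookkeeping fills the preceding lines, inverts that status into a success. Stripped of the argument validation, NOT behaves like this sketch:

    # Simplified model of the NOT helper: pass only if the command fails.
    NOT() {
        if "$@"; then
            return 1    # unexpected success -> test failure
        fi
        return 0        # expected failure -> test passes
    }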
00:25:09.766 23:51:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:09.766 23:51:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:09.766 ************************************ 00:25:09.766 END TEST nvmf_target_disconnect_tc1 00:25:09.766 ************************************ 00:25:10.023 23:51:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:25:10.023 23:51:44 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:25:10.023 23:51:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:10.023 23:51:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:10.023 23:51:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:10.023 ************************************ 00:25:10.023 START TEST nvmf_target_disconnect_tc2 00:25:10.023 ************************************ 00:25:10.023 23:51:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:25:10.023 23:51:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:25:10.023 23:51:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:10.023 23:51:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:10.023 23:51:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:10.023 23:51:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:10.023 23:51:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3888490 00:25:10.023 23:51:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:10.023 23:51:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3888490 00:25:10.023 23:51:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3888490 ']' 00:25:10.023 23:51:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:10.023 23:51:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:10.023 23:51:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:10.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
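tc2, starting here, inverts the setup: disconnect_init launches a real nvmf_tgt inside the target namespace (pid 3888490, held by waitforlisten), repeats the transport/subsystem/listener bring-up over RPC, starts the reconnect example against it (pid 3888519 below), and then kills the target while 32-deep I/O is in flight. Condensed to the shell steps visible in this run (paths abbreviated; a sketch of the flow, not the script verbatim):

    # Condensed tc2 choreography, matching the pids and flags in this run.
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!                      # 3888490 here
    # ... rpc.py bring-up of transport, cnode1, Malloc0, listener ...
    build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    reconnectpid=$!                 # 3888519 here
    sleep 2                         # let I/O get going
    kill -9 "$nvmfpid"              # yank the target mid-run
    sleep 2                         # the qpair failures below follow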
00:25:10.023 23:51:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:10.023 23:51:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:10.023 [2024-07-15 23:51:44.967640] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:25:10.023 [2024-07-15 23:51:44.967727] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:10.023 EAL: No free 2048 kB hugepages reported on node 1 00:25:10.023 [2024-07-15 23:51:45.031739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:10.023 [2024-07-15 23:51:45.140604] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:10.023 [2024-07-15 23:51:45.140666] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:10.023 [2024-07-15 23:51:45.140694] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:10.023 [2024-07-15 23:51:45.140705] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:10.023 [2024-07-15 23:51:45.140714] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:10.023 [2024-07-15 23:51:45.140798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:25:10.023 [2024-07-15 23:51:45.140829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:25:10.023 [2024-07-15 23:51:45.140884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:25:10.023 [2024-07-15 23:51:45.140886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:25:10.280 23:51:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:10.280 23:51:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:25:10.280 23:51:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:10.280 23:51:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:10.280 23:51:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:10.280 23:51:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:10.280 23:51:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:10.280 23:51:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.280 23:51:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:10.280 Malloc0 00:25:10.280 23:51:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.280 23:51:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:10.280 23:51:45 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.280 23:51:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:10.280 [2024-07-15 23:51:45.336826] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:10.280 23:51:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.280 23:51:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:10.280 23:51:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.280 23:51:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:10.280 23:51:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.280 23:51:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:10.280 23:51:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.280 23:51:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:10.280 23:51:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.280 23:51:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:10.280 23:51:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.280 23:51:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:10.280 [2024-07-15 23:51:45.365115] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:10.280 23:51:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.280 23:51:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:10.280 23:51:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.280 23:51:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:10.280 23:51:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.280 23:51:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3888519 00:25:10.280 23:51:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:25:10.280 23:51:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:10.536 EAL: No free 2048 kB 
hugepages reported on node 1 00:25:12.450 23:51:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3888490 00:25:12.450 23:51:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Write completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Write completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Write completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Write completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Write completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Write completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Write completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Write completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 [2024-07-15 23:51:47.390563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting 
I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Write completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Write completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Write completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Write completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Write completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Write completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Write completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Write completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Write completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Write completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Write completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Write completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Write completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Write completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Write completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 [2024-07-15 23:51:47.390875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Write completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 00:25:12.450 Read completed with error (sct=0, sc=8) 00:25:12.450 starting I/O failed 
00:25:12.450 Write completed with error (sct=0, sc=8)
00:25:12.450 starting I/O failed
00:25:12.450 Read completed with error (sct=0, sc=8)
00:25:12.450 starting I/O failed
00:25:12.451 [2024-07-15 23:51:47.391189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:12.451 [2024-07-15 23:51:47.391477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:12.451 [2024-07-15 23:51:47.391644] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:12.451 [2024-07-15 23:51:47.391685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:12.451 qpair failed and we were unable to recover it.
00:25:12.451 [2024-07-15 23:51:47.392895] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:12.451 [2024-07-15 23:51:47.392928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:12.451 qpair failed and we were unable to recover it.
00:25:12.451 [2024-07-15 23:51:47.394947] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:12.451 [2024-07-15 23:51:47.394996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:12.451 qpair failed and we were unable to recover it.
00:25:12.452 [2024-07-15 23:51:47.402790] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:12.452 [2024-07-15 23:51:47.402829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:12.452 qpair failed and we were unable to recover it.
00:25:12.455 [2024-07-15 23:51:47.419854] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.456 [2024-07-15 23:51:47.419880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.456 qpair failed and we were unable to recover it. 00:25:12.456 [2024-07-15 23:51:47.420015] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.456 [2024-07-15 23:51:47.420053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.456 qpair failed and we were unable to recover it. 00:25:12.456 [2024-07-15 23:51:47.420195] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.456 [2024-07-15 23:51:47.420233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.456 qpair failed and we were unable to recover it. 00:25:12.456 [2024-07-15 23:51:47.420359] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.456 [2024-07-15 23:51:47.420386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.456 qpair failed and we were unable to recover it. 00:25:12.456 [2024-07-15 23:51:47.420482] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.456 [2024-07-15 23:51:47.420508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.456 qpair failed and we were unable to recover it. 00:25:12.456 [2024-07-15 23:51:47.420631] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.456 [2024-07-15 23:51:47.420658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.456 qpair failed and we were unable to recover it. 00:25:12.456 [2024-07-15 23:51:47.420759] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.456 [2024-07-15 23:51:47.420784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.456 qpair failed and we were unable to recover it. 00:25:12.456 [2024-07-15 23:51:47.420908] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.456 [2024-07-15 23:51:47.420935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.456 qpair failed and we were unable to recover it. 00:25:12.456 [2024-07-15 23:51:47.421074] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.456 [2024-07-15 23:51:47.421103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.456 qpair failed and we were unable to recover it. 00:25:12.456 [2024-07-15 23:51:47.421204] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.456 [2024-07-15 23:51:47.421243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.456 qpair failed and we were unable to recover it. 
00:25:12.456 [2024-07-15 23:51:47.421400] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.456 [2024-07-15 23:51:47.421428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.456 qpair failed and we were unable to recover it. 00:25:12.456 [2024-07-15 23:51:47.421547] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.456 [2024-07-15 23:51:47.421572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.456 qpair failed and we were unable to recover it. 00:25:12.456 [2024-07-15 23:51:47.421698] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.456 [2024-07-15 23:51:47.421723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.456 qpair failed and we were unable to recover it. 00:25:12.456 [2024-07-15 23:51:47.421835] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.456 [2024-07-15 23:51:47.421874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.456 qpair failed and we were unable to recover it. 00:25:12.456 [2024-07-15 23:51:47.422029] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.456 [2024-07-15 23:51:47.422057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.456 qpair failed and we were unable to recover it. 00:25:12.456 [2024-07-15 23:51:47.422178] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.456 [2024-07-15 23:51:47.422204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.456 qpair failed and we were unable to recover it. 00:25:12.456 [2024-07-15 23:51:47.422333] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.456 [2024-07-15 23:51:47.422358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.456 qpair failed and we were unable to recover it. 00:25:12.456 [2024-07-15 23:51:47.422489] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.456 [2024-07-15 23:51:47.422514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.456 qpair failed and we were unable to recover it. 00:25:12.456 [2024-07-15 23:51:47.422606] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.456 [2024-07-15 23:51:47.422631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.456 qpair failed and we were unable to recover it. 00:25:12.456 [2024-07-15 23:51:47.422773] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.456 [2024-07-15 23:51:47.422812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.456 qpair failed and we were unable to recover it. 
00:25:12.456 [2024-07-15 23:51:47.422933] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.456 [2024-07-15 23:51:47.422979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.456 qpair failed and we were unable to recover it. 00:25:12.456 [2024-07-15 23:51:47.423111] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.456 [2024-07-15 23:51:47.423140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.456 qpair failed and we were unable to recover it. 00:25:12.456 [2024-07-15 23:51:47.423260] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.456 [2024-07-15 23:51:47.423286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.456 qpair failed and we were unable to recover it. 00:25:12.456 [2024-07-15 23:51:47.423409] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.456 [2024-07-15 23:51:47.423437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.456 qpair failed and we were unable to recover it. 00:25:12.456 [2024-07-15 23:51:47.423562] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.456 [2024-07-15 23:51:47.423589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.456 qpair failed and we were unable to recover it. 00:25:12.456 [2024-07-15 23:51:47.423679] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.456 [2024-07-15 23:51:47.423705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.456 qpair failed and we were unable to recover it. 00:25:12.456 [2024-07-15 23:51:47.423818] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.456 [2024-07-15 23:51:47.423857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.456 qpair failed and we were unable to recover it. 00:25:12.456 [2024-07-15 23:51:47.424017] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.456 [2024-07-15 23:51:47.424045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.456 qpair failed and we were unable to recover it. 00:25:12.456 [2024-07-15 23:51:47.424171] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.456 [2024-07-15 23:51:47.424197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.456 qpair failed and we were unable to recover it. 00:25:12.456 [2024-07-15 23:51:47.424317] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.456 [2024-07-15 23:51:47.424342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.456 qpair failed and we were unable to recover it. 
00:25:12.456 [2024-07-15 23:51:47.424468] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.456 [2024-07-15 23:51:47.424494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.456 qpair failed and we were unable to recover it. 00:25:12.456 [2024-07-15 23:51:47.424595] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.456 [2024-07-15 23:51:47.424623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.456 qpair failed and we were unable to recover it. 00:25:12.456 [2024-07-15 23:51:47.424729] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.456 [2024-07-15 23:51:47.424760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.456 qpair failed and we were unable to recover it. 00:25:12.456 [2024-07-15 23:51:47.424899] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.456 [2024-07-15 23:51:47.424939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.456 qpair failed and we were unable to recover it. 00:25:12.456 [2024-07-15 23:51:47.425082] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.456 [2024-07-15 23:51:47.425110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.456 qpair failed and we were unable to recover it. 00:25:12.456 [2024-07-15 23:51:47.425233] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.457 [2024-07-15 23:51:47.425259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.457 qpair failed and we were unable to recover it. 00:25:12.457 [2024-07-15 23:51:47.425479] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.457 [2024-07-15 23:51:47.425533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.457 qpair failed and we were unable to recover it. 00:25:12.457 [2024-07-15 23:51:47.425737] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.457 [2024-07-15 23:51:47.425791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.457 qpair failed and we were unable to recover it. 00:25:12.457 [2024-07-15 23:51:47.425940] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.457 [2024-07-15 23:51:47.425975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.457 qpair failed and we were unable to recover it. 00:25:12.457 [2024-07-15 23:51:47.426070] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.457 [2024-07-15 23:51:47.426096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.457 qpair failed and we were unable to recover it. 
00:25:12.457 [2024-07-15 23:51:47.426223] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.457 [2024-07-15 23:51:47.426248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.457 qpair failed and we were unable to recover it. 00:25:12.457 [2024-07-15 23:51:47.426367] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.457 [2024-07-15 23:51:47.426393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.457 qpair failed and we were unable to recover it. 00:25:12.457 [2024-07-15 23:51:47.426487] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.457 [2024-07-15 23:51:47.426514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.457 qpair failed and we were unable to recover it. 00:25:12.457 [2024-07-15 23:51:47.426636] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.457 [2024-07-15 23:51:47.426664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.457 qpair failed and we were unable to recover it. 00:25:12.457 [2024-07-15 23:51:47.426813] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.457 [2024-07-15 23:51:47.426839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.457 qpair failed and we were unable to recover it. 00:25:12.457 [2024-07-15 23:51:47.426965] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.457 [2024-07-15 23:51:47.426993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.457 qpair failed and we were unable to recover it. 00:25:12.457 [2024-07-15 23:51:47.427098] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.457 [2024-07-15 23:51:47.427125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.457 qpair failed and we were unable to recover it. 00:25:12.457 [2024-07-15 23:51:47.427221] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.457 [2024-07-15 23:51:47.427247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.457 qpair failed and we were unable to recover it. 00:25:12.457 [2024-07-15 23:51:47.427348] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.457 [2024-07-15 23:51:47.427374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.457 qpair failed and we were unable to recover it. 00:25:12.457 [2024-07-15 23:51:47.427501] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.457 [2024-07-15 23:51:47.427529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.457 qpair failed and we were unable to recover it. 
00:25:12.457 [2024-07-15 23:51:47.427659] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.457 [2024-07-15 23:51:47.427687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.457 qpair failed and we were unable to recover it. 00:25:12.457 [2024-07-15 23:51:47.427814] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.457 [2024-07-15 23:51:47.427841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.457 qpair failed and we were unable to recover it. 00:25:12.457 [2024-07-15 23:51:47.427938] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.457 [2024-07-15 23:51:47.427978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.457 qpair failed and we were unable to recover it. 00:25:12.457 [2024-07-15 23:51:47.428102] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.457 [2024-07-15 23:51:47.428128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.457 qpair failed and we were unable to recover it. 00:25:12.457 [2024-07-15 23:51:47.428252] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.457 [2024-07-15 23:51:47.428279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.457 qpair failed and we were unable to recover it. 00:25:12.457 [2024-07-15 23:51:47.428495] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.457 [2024-07-15 23:51:47.428546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.457 qpair failed and we were unable to recover it. 00:25:12.457 [2024-07-15 23:51:47.428642] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.457 [2024-07-15 23:51:47.428668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.457 qpair failed and we were unable to recover it. 00:25:12.457 [2024-07-15 23:51:47.428811] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.457 [2024-07-15 23:51:47.428837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.457 qpair failed and we were unable to recover it. 00:25:12.457 [2024-07-15 23:51:47.428984] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.457 [2024-07-15 23:51:47.429023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.457 qpair failed and we were unable to recover it. 00:25:12.457 [2024-07-15 23:51:47.429158] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.457 [2024-07-15 23:51:47.429193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.457 qpair failed and we were unable to recover it. 
00:25:12.457 [2024-07-15 23:51:47.429321] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.457 [2024-07-15 23:51:47.429348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.457 qpair failed and we were unable to recover it. 00:25:12.457 [2024-07-15 23:51:47.429496] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.457 [2024-07-15 23:51:47.429550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.457 qpair failed and we were unable to recover it. 00:25:12.457 [2024-07-15 23:51:47.429642] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.457 [2024-07-15 23:51:47.429668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.457 qpair failed and we were unable to recover it. 00:25:12.457 [2024-07-15 23:51:47.429768] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.457 [2024-07-15 23:51:47.429793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.457 qpair failed and we were unable to recover it. 00:25:12.457 [2024-07-15 23:51:47.429921] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.457 [2024-07-15 23:51:47.429947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.457 qpair failed and we were unable to recover it. 00:25:12.457 [2024-07-15 23:51:47.430048] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.457 [2024-07-15 23:51:47.430074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.457 qpair failed and we were unable to recover it. 00:25:12.457 [2024-07-15 23:51:47.430191] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.457 [2024-07-15 23:51:47.430216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.457 qpair failed and we were unable to recover it. 00:25:12.457 [2024-07-15 23:51:47.430310] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.457 [2024-07-15 23:51:47.430335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.457 qpair failed and we were unable to recover it. 00:25:12.457 [2024-07-15 23:51:47.430457] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.457 [2024-07-15 23:51:47.430497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.457 qpair failed and we were unable to recover it. 00:25:12.457 [2024-07-15 23:51:47.430636] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.457 [2024-07-15 23:51:47.430676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.457 qpair failed and we were unable to recover it. 
00:25:12.457 [2024-07-15 23:51:47.430783] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.457 [2024-07-15 23:51:47.430811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.457 qpair failed and we were unable to recover it. 00:25:12.457 [2024-07-15 23:51:47.430906] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.457 [2024-07-15 23:51:47.430933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.457 qpair failed and we were unable to recover it. 00:25:12.457 [2024-07-15 23:51:47.431039] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.457 [2024-07-15 23:51:47.431068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.457 qpair failed and we were unable to recover it. 00:25:12.457 [2024-07-15 23:51:47.431175] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.457 [2024-07-15 23:51:47.431202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.457 qpair failed and we were unable to recover it. 00:25:12.457 [2024-07-15 23:51:47.431393] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.457 [2024-07-15 23:51:47.431454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.457 qpair failed and we were unable to recover it. 00:25:12.457 [2024-07-15 23:51:47.431622] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.457 [2024-07-15 23:51:47.431648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.457 qpair failed and we were unable to recover it. 00:25:12.457 [2024-07-15 23:51:47.431746] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.458 [2024-07-15 23:51:47.431772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.458 qpair failed and we were unable to recover it. 00:25:12.458 [2024-07-15 23:51:47.431869] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.458 [2024-07-15 23:51:47.431897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.458 qpair failed and we were unable to recover it. 00:25:12.458 [2024-07-15 23:51:47.431999] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.458 [2024-07-15 23:51:47.432027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.458 qpair failed and we were unable to recover it. 00:25:12.458 [2024-07-15 23:51:47.432133] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.458 [2024-07-15 23:51:47.432158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.458 qpair failed and we were unable to recover it. 
00:25:12.458 [2024-07-15 23:51:47.432253] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.458 [2024-07-15 23:51:47.432279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.458 qpair failed and we were unable to recover it. 00:25:12.458 [2024-07-15 23:51:47.432406] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.458 [2024-07-15 23:51:47.432432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.458 qpair failed and we were unable to recover it. 00:25:12.458 [2024-07-15 23:51:47.432556] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.458 [2024-07-15 23:51:47.432581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.458 qpair failed and we were unable to recover it. 00:25:12.458 [2024-07-15 23:51:47.432691] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.458 [2024-07-15 23:51:47.432717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.458 qpair failed and we were unable to recover it. 00:25:12.458 [2024-07-15 23:51:47.432841] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.458 [2024-07-15 23:51:47.432868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.458 qpair failed and we were unable to recover it. 00:25:12.458 [2024-07-15 23:51:47.432994] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.458 [2024-07-15 23:51:47.433022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.458 qpair failed and we were unable to recover it. 00:25:12.458 [2024-07-15 23:51:47.433143] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.458 [2024-07-15 23:51:47.433171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.458 qpair failed and we were unable to recover it. 00:25:12.458 [2024-07-15 23:51:47.433367] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.458 [2024-07-15 23:51:47.433430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.458 qpair failed and we were unable to recover it. 00:25:12.458 [2024-07-15 23:51:47.433639] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.458 [2024-07-15 23:51:47.433686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.458 qpair failed and we were unable to recover it. 00:25:12.458 [2024-07-15 23:51:47.433789] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.458 [2024-07-15 23:51:47.433814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.458 qpair failed and we were unable to recover it. 
00:25:12.458 [2024-07-15 23:51:47.433939] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.458 [2024-07-15 23:51:47.433971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.458 qpair failed and we were unable to recover it. 00:25:12.458 [2024-07-15 23:51:47.434089] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.458 [2024-07-15 23:51:47.434115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.458 qpair failed and we were unable to recover it. 00:25:12.458 [2024-07-15 23:51:47.434234] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.458 [2024-07-15 23:51:47.434259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.458 qpair failed and we were unable to recover it. 00:25:12.458 [2024-07-15 23:51:47.434370] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.458 [2024-07-15 23:51:47.434416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.458 qpair failed and we were unable to recover it. 00:25:12.458 [2024-07-15 23:51:47.434566] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.458 [2024-07-15 23:51:47.434620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.458 qpair failed and we were unable to recover it. 00:25:12.458 [2024-07-15 23:51:47.434740] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.458 [2024-07-15 23:51:47.434765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.458 qpair failed and we were unable to recover it. 00:25:12.458 [2024-07-15 23:51:47.434885] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.458 [2024-07-15 23:51:47.434911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.458 qpair failed and we were unable to recover it. 00:25:12.458 [2024-07-15 23:51:47.435034] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.458 [2024-07-15 23:51:47.435060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.458 qpair failed and we were unable to recover it. 00:25:12.458 [2024-07-15 23:51:47.435164] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.458 [2024-07-15 23:51:47.435193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.458 qpair failed and we were unable to recover it. 00:25:12.458 [2024-07-15 23:51:47.435334] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.458 [2024-07-15 23:51:47.435373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.458 qpair failed and we were unable to recover it. 
00:25:12.458 [2024-07-15 23:51:47.435483] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.458 [2024-07-15 23:51:47.435510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.458 qpair failed and we were unable to recover it. 00:25:12.458 [2024-07-15 23:51:47.435660] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.458 [2024-07-15 23:51:47.435686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.458 qpair failed and we were unable to recover it. 00:25:12.458 [2024-07-15 23:51:47.435812] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.458 [2024-07-15 23:51:47.435838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.458 qpair failed and we were unable to recover it. 00:25:12.458 [2024-07-15 23:51:47.435985] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.458 [2024-07-15 23:51:47.436024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.458 qpair failed and we were unable to recover it. 00:25:12.458 [2024-07-15 23:51:47.436156] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.458 [2024-07-15 23:51:47.436184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.458 qpair failed and we were unable to recover it. 00:25:12.458 [2024-07-15 23:51:47.436313] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.458 [2024-07-15 23:51:47.436339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.458 qpair failed and we were unable to recover it. 00:25:12.458 [2024-07-15 23:51:47.436461] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.458 [2024-07-15 23:51:47.436486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.458 qpair failed and we were unable to recover it. 00:25:12.458 [2024-07-15 23:51:47.436611] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.458 [2024-07-15 23:51:47.436637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.458 qpair failed and we were unable to recover it. 00:25:12.458 [2024-07-15 23:51:47.436788] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.458 [2024-07-15 23:51:47.436813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.458 qpair failed and we were unable to recover it. 00:25:12.458 [2024-07-15 23:51:47.436966] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.458 [2024-07-15 23:51:47.436993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.458 qpair failed and we were unable to recover it. 
00:25:12.458 [2024-07-15 23:51:47.437127] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.458 [2024-07-15 23:51:47.437157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.458 qpair failed and we were unable to recover it. 00:25:12.458 [2024-07-15 23:51:47.437259] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.458 [2024-07-15 23:51:47.437287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.458 qpair failed and we were unable to recover it. 00:25:12.458 [2024-07-15 23:51:47.437413] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.458 [2024-07-15 23:51:47.437440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.458 qpair failed and we were unable to recover it. 00:25:12.458 [2024-07-15 23:51:47.437539] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.459 [2024-07-15 23:51:47.437565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.459 qpair failed and we were unable to recover it. 00:25:12.459 [2024-07-15 23:51:47.437717] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.459 [2024-07-15 23:51:47.437742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.459 qpair failed and we were unable to recover it. 00:25:12.459 [2024-07-15 23:51:47.437864] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.459 [2024-07-15 23:51:47.437890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.459 qpair failed and we were unable to recover it. 00:25:12.459 [2024-07-15 23:51:47.438009] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.459 [2024-07-15 23:51:47.438035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.459 qpair failed and we were unable to recover it. 00:25:12.459 [2024-07-15 23:51:47.438181] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.459 [2024-07-15 23:51:47.438207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.459 qpair failed and we were unable to recover it. 00:25:12.459 [2024-07-15 23:51:47.438303] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.459 [2024-07-15 23:51:47.438329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.459 qpair failed and we were unable to recover it. 00:25:12.459 [2024-07-15 23:51:47.438475] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.459 [2024-07-15 23:51:47.438528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.459 qpair failed and we were unable to recover it. 
00:25:12.459 [2024-07-15 23:51:47.438615] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.459 [2024-07-15 23:51:47.438640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.459 qpair failed and we were unable to recover it. 00:25:12.459 [2024-07-15 23:51:47.438734] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.459 [2024-07-15 23:51:47.438759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.459 qpair failed and we were unable to recover it. 00:25:12.459 [2024-07-15 23:51:47.438858] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.459 [2024-07-15 23:51:47.438898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.459 qpair failed and we were unable to recover it. 00:25:12.459 [2024-07-15 23:51:47.439029] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.459 [2024-07-15 23:51:47.439057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.459 qpair failed and we were unable to recover it. 00:25:12.459 [2024-07-15 23:51:47.439159] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.459 [2024-07-15 23:51:47.439186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.459 qpair failed and we were unable to recover it. 00:25:12.459 [2024-07-15 23:51:47.439357] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.459 [2024-07-15 23:51:47.439406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.459 qpair failed and we were unable to recover it. 00:25:12.459 [2024-07-15 23:51:47.439632] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.459 [2024-07-15 23:51:47.439686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.459 qpair failed and we were unable to recover it. 00:25:12.459 [2024-07-15 23:51:47.439812] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.459 [2024-07-15 23:51:47.439838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.459 qpair failed and we were unable to recover it. 00:25:12.459 [2024-07-15 23:51:47.439934] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.459 [2024-07-15 23:51:47.439968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.459 qpair failed and we were unable to recover it. 00:25:12.459 [2024-07-15 23:51:47.440103] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.459 [2024-07-15 23:51:47.440133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.459 qpair failed and we were unable to recover it. 
00:25:12.459 [2024-07-15 23:51:47.440252] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:12.459 [2024-07-15 23:51:47.440278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:12.459 qpair failed and we were unable to recover it.
00:25:12.459 [... the same three-line sequence -- posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it." -- repeats continuously with advancing timestamps through [2024-07-15 23:51:47.472758], cycling over tqpair=0x7feb84000b90, 0x7feb8c000b90, 0x7feb94000b90, and 0x7a7200, every attempt against addr=10.0.0.2, port=4420 ...]
00:25:12.464 [2024-07-15 23:51:47.472856] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.464 [2024-07-15 23:51:47.472882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.464 qpair failed and we were unable to recover it. 00:25:12.464 [2024-07-15 23:51:47.472978] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.464 [2024-07-15 23:51:47.473009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.464 qpair failed and we were unable to recover it. 00:25:12.464 [2024-07-15 23:51:47.473108] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.464 [2024-07-15 23:51:47.473135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.464 qpair failed and we were unable to recover it. 00:25:12.464 [2024-07-15 23:51:47.473223] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.464 [2024-07-15 23:51:47.473249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.464 qpair failed and we were unable to recover it. 00:25:12.464 [2024-07-15 23:51:47.473372] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.464 [2024-07-15 23:51:47.473399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.464 qpair failed and we were unable to recover it. 00:25:12.464 [2024-07-15 23:51:47.473495] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.464 [2024-07-15 23:51:47.473520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.464 qpair failed and we were unable to recover it. 00:25:12.464 [2024-07-15 23:51:47.473619] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.464 [2024-07-15 23:51:47.473645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.464 qpair failed and we were unable to recover it. 00:25:12.464 [2024-07-15 23:51:47.473746] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.464 [2024-07-15 23:51:47.473772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.464 qpair failed and we were unable to recover it. 00:25:12.464 [2024-07-15 23:51:47.473895] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.464 [2024-07-15 23:51:47.473921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.464 qpair failed and we were unable to recover it. 00:25:12.464 [2024-07-15 23:51:47.474046] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.464 [2024-07-15 23:51:47.474074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.464 qpair failed and we were unable to recover it. 
00:25:12.464 [2024-07-15 23:51:47.474168] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.464 [2024-07-15 23:51:47.474194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.464 qpair failed and we were unable to recover it. 00:25:12.464 [2024-07-15 23:51:47.474354] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.464 [2024-07-15 23:51:47.474380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.464 qpair failed and we were unable to recover it. 00:25:12.464 [2024-07-15 23:51:47.474503] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.464 [2024-07-15 23:51:47.474528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.464 qpair failed and we were unable to recover it. 00:25:12.464 [2024-07-15 23:51:47.474653] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.464 [2024-07-15 23:51:47.474678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.464 qpair failed and we were unable to recover it. 00:25:12.464 [2024-07-15 23:51:47.474796] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.464 [2024-07-15 23:51:47.474822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.464 qpair failed and we were unable to recover it. 00:25:12.464 [2024-07-15 23:51:47.474958] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.464 [2024-07-15 23:51:47.474985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.464 qpair failed and we were unable to recover it. 00:25:12.464 [2024-07-15 23:51:47.475093] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.464 [2024-07-15 23:51:47.475133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.464 qpair failed and we were unable to recover it. 00:25:12.464 [2024-07-15 23:51:47.475329] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.464 [2024-07-15 23:51:47.475392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.464 qpair failed and we were unable to recover it. 00:25:12.464 [2024-07-15 23:51:47.475493] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.464 [2024-07-15 23:51:47.475522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.464 qpair failed and we were unable to recover it. 00:25:12.464 [2024-07-15 23:51:47.475695] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.464 [2024-07-15 23:51:47.475749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.464 qpair failed and we were unable to recover it. 
00:25:12.464 [2024-07-15 23:51:47.475870] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.464 [2024-07-15 23:51:47.475896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.464 qpair failed and we were unable to recover it. 00:25:12.464 [2024-07-15 23:51:47.476001] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.464 [2024-07-15 23:51:47.476029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.464 qpair failed and we were unable to recover it. 00:25:12.464 [2024-07-15 23:51:47.476182] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.464 [2024-07-15 23:51:47.476209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.464 qpair failed and we were unable to recover it. 00:25:12.464 [2024-07-15 23:51:47.476340] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.464 [2024-07-15 23:51:47.476366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.464 qpair failed and we were unable to recover it. 00:25:12.464 [2024-07-15 23:51:47.476495] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.464 [2024-07-15 23:51:47.476521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.464 qpair failed and we were unable to recover it. 00:25:12.464 [2024-07-15 23:51:47.476671] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.464 [2024-07-15 23:51:47.476697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.464 qpair failed and we were unable to recover it. 00:25:12.464 [2024-07-15 23:51:47.476794] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.464 [2024-07-15 23:51:47.476821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.476909] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.476937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.477078] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.477117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.477217] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.477244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 
00:25:12.465 [2024-07-15 23:51:47.477366] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.477392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.477541] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.477566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.477758] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.477809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.477930] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.477961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.478082] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.478108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.478200] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.478227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.478327] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.478353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.478498] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.478524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.478643] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.478668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.478791] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.478817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 
00:25:12.465 [2024-07-15 23:51:47.478907] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.478933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.479058] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.479084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.479189] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.479215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.479331] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.479356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.479480] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.479505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.479632] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.479657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.479744] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.479770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.479913] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.479952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.480111] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.480139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.480265] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.480291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 
00:25:12.465 [2024-07-15 23:51:47.480409] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.480435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.480560] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.480585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.480687] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.480715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.480841] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.480869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.480996] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.481022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.481145] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.481175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.481293] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.481318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.481449] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.481474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.481588] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.481613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.481712] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.481740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 
00:25:12.465 [2024-07-15 23:51:47.481849] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.481888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.482020] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.482048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.482172] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.482199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.482299] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.482326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.482453] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.482479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.482571] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.482598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.482723] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.482751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.482850] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.482876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.482973] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.483001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.483109] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.483136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 
00:25:12.465 [2024-07-15 23:51:47.483257] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.483283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.483405] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.483431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.483583] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.483609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.483715] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.483753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.483889] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.483918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.484056] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.465 [2024-07-15 23:51:47.484085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.465 qpair failed and we were unable to recover it. 00:25:12.465 [2024-07-15 23:51:47.484186] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.466 [2024-07-15 23:51:47.484213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.466 qpair failed and we were unable to recover it. 00:25:12.466 [2024-07-15 23:51:47.484345] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.466 [2024-07-15 23:51:47.484395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.466 qpair failed and we were unable to recover it. 00:25:12.466 [2024-07-15 23:51:47.484522] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.466 [2024-07-15 23:51:47.484573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.466 qpair failed and we were unable to recover it. 00:25:12.466 [2024-07-15 23:51:47.484725] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.466 [2024-07-15 23:51:47.484778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.466 qpair failed and we were unable to recover it. 
00:25:12.466 [2024-07-15 23:51:47.484872] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.466 [2024-07-15 23:51:47.484901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.466 qpair failed and we were unable to recover it. 00:25:12.466 [2024-07-15 23:51:47.485018] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.466 [2024-07-15 23:51:47.485057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.466 qpair failed and we were unable to recover it. 00:25:12.466 [2024-07-15 23:51:47.485175] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.466 [2024-07-15 23:51:47.485203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.466 qpair failed and we were unable to recover it. 00:25:12.466 [2024-07-15 23:51:47.485357] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.466 [2024-07-15 23:51:47.485383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.466 qpair failed and we were unable to recover it. 00:25:12.466 [2024-07-15 23:51:47.485475] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.466 [2024-07-15 23:51:47.485501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.466 qpair failed and we were unable to recover it. 00:25:12.466 [2024-07-15 23:51:47.485711] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.466 [2024-07-15 23:51:47.485763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.466 qpair failed and we were unable to recover it. 00:25:12.466 [2024-07-15 23:51:47.485883] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.466 [2024-07-15 23:51:47.485910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.466 qpair failed and we were unable to recover it. 00:25:12.466 [2024-07-15 23:51:47.486017] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.466 [2024-07-15 23:51:47.486044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.466 qpair failed and we were unable to recover it. 00:25:12.466 [2024-07-15 23:51:47.486170] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.466 [2024-07-15 23:51:47.486196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.466 qpair failed and we were unable to recover it. 00:25:12.466 [2024-07-15 23:51:47.486293] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.466 [2024-07-15 23:51:47.486321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.466 qpair failed and we were unable to recover it. 
00:25:12.466 [2024-07-15 23:51:47.486455] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.466 [2024-07-15 23:51:47.486482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.466 qpair failed and we were unable to recover it. 00:25:12.466 [2024-07-15 23:51:47.486613] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.466 [2024-07-15 23:51:47.486642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.466 qpair failed and we were unable to recover it. 00:25:12.466 [2024-07-15 23:51:47.486768] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.466 [2024-07-15 23:51:47.486794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.466 qpair failed and we were unable to recover it. 00:25:12.466 [2024-07-15 23:51:47.486929] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.466 [2024-07-15 23:51:47.486975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.466 qpair failed and we were unable to recover it. 00:25:12.466 [2024-07-15 23:51:47.487091] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.466 [2024-07-15 23:51:47.487119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.466 qpair failed and we were unable to recover it. 00:25:12.466 [2024-07-15 23:51:47.487274] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.466 [2024-07-15 23:51:47.487305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.466 qpair failed and we were unable to recover it. 00:25:12.466 [2024-07-15 23:51:47.487402] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.466 [2024-07-15 23:51:47.487427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.466 qpair failed and we were unable to recover it. 00:25:12.466 [2024-07-15 23:51:47.487549] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.466 [2024-07-15 23:51:47.487575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.466 qpair failed and we were unable to recover it. 00:25:12.466 [2024-07-15 23:51:47.487699] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.466 [2024-07-15 23:51:47.487725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.466 qpair failed and we were unable to recover it. 00:25:12.466 [2024-07-15 23:51:47.487823] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.466 [2024-07-15 23:51:47.487848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.466 qpair failed and we were unable to recover it. 
00:25:12.466 [2024-07-15 23:51:47.488003] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.466 [2024-07-15 23:51:47.488030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.466 qpair failed and we were unable to recover it. 00:25:12.466 [2024-07-15 23:51:47.488158] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.466 [2024-07-15 23:51:47.488184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.466 qpair failed and we were unable to recover it. 00:25:12.466 [2024-07-15 23:51:47.488282] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.466 [2024-07-15 23:51:47.488309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.466 qpair failed and we were unable to recover it. 00:25:12.466 [2024-07-15 23:51:47.488399] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.466 [2024-07-15 23:51:47.488425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.466 qpair failed and we were unable to recover it. 00:25:12.466 [2024-07-15 23:51:47.488541] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.466 [2024-07-15 23:51:47.488567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.466 qpair failed and we were unable to recover it. 00:25:12.466 [2024-07-15 23:51:47.488705] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.466 [2024-07-15 23:51:47.488744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.466 qpair failed and we were unable to recover it. 00:25:12.466 [2024-07-15 23:51:47.488872] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.466 [2024-07-15 23:51:47.488899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.466 qpair failed and we were unable to recover it. 00:25:12.466 [2024-07-15 23:51:47.489052] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.466 [2024-07-15 23:51:47.489091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.466 qpair failed and we were unable to recover it. 00:25:12.466 [2024-07-15 23:51:47.489227] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.466 [2024-07-15 23:51:47.489255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.466 qpair failed and we were unable to recover it. 00:25:12.466 [2024-07-15 23:51:47.489390] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.466 [2024-07-15 23:51:47.489418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.466 qpair failed and we were unable to recover it. 
00:25:12.466 [2024-07-15 23:51:47.489543] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.466 [2024-07-15 23:51:47.489569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.466 qpair failed and we were unable to recover it. 00:25:12.466 [2024-07-15 23:51:47.489695] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.466 [2024-07-15 23:51:47.489721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.466 qpair failed and we were unable to recover it. 00:25:12.466 [2024-07-15 23:51:47.489827] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.466 [2024-07-15 23:51:47.489866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.466 qpair failed and we were unable to recover it. 00:25:12.466 [2024-07-15 23:51:47.489990] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.466 [2024-07-15 23:51:47.490018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.466 qpair failed and we were unable to recover it. 00:25:12.466 [2024-07-15 23:51:47.490142] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.466 [2024-07-15 23:51:47.490168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.466 qpair failed and we were unable to recover it. 00:25:12.466 [2024-07-15 23:51:47.490287] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.466 [2024-07-15 23:51:47.490313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.466 qpair failed and we were unable to recover it. 00:25:12.466 [2024-07-15 23:51:47.490486] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.466 [2024-07-15 23:51:47.490543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.466 qpair failed and we were unable to recover it. 00:25:12.466 [2024-07-15 23:51:47.490727] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.466 [2024-07-15 23:51:47.490753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.466 qpair failed and we were unable to recover it. 00:25:12.466 [2024-07-15 23:51:47.490870] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.466 [2024-07-15 23:51:47.490896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.466 qpair failed and we were unable to recover it. 00:25:12.466 [2024-07-15 23:51:47.491016] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.466 [2024-07-15 23:51:47.491042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.466 qpair failed and we were unable to recover it. 
00:25:12.466 [2024-07-15 23:51:47.491142] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.467 [2024-07-15 23:51:47.491167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.467 qpair failed and we were unable to recover it. 00:25:12.467 [2024-07-15 23:51:47.491286] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.467 [2024-07-15 23:51:47.491313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.467 qpair failed and we were unable to recover it. 00:25:12.467 [2024-07-15 23:51:47.491416] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.467 [2024-07-15 23:51:47.491444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.467 qpair failed and we were unable to recover it. 00:25:12.467 [2024-07-15 23:51:47.491558] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.467 [2024-07-15 23:51:47.491617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.467 qpair failed and we were unable to recover it. 00:25:12.467 [2024-07-15 23:51:47.491767] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.467 [2024-07-15 23:51:47.491793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.467 qpair failed and we were unable to recover it. 00:25:12.467 [2024-07-15 23:51:47.491896] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.467 [2024-07-15 23:51:47.491935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.467 qpair failed and we were unable to recover it. 00:25:12.467 [2024-07-15 23:51:47.492076] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.467 [2024-07-15 23:51:47.492103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.467 qpair failed and we were unable to recover it. 00:25:12.467 [2024-07-15 23:51:47.492204] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.467 [2024-07-15 23:51:47.492229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.467 qpair failed and we were unable to recover it. 00:25:12.467 [2024-07-15 23:51:47.492326] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.467 [2024-07-15 23:51:47.492351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.467 qpair failed and we were unable to recover it. 00:25:12.467 [2024-07-15 23:51:47.492468] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.467 [2024-07-15 23:51:47.492493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.467 qpair failed and we were unable to recover it. 
00:25:12.467 [2024-07-15 23:51:47.492617] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:12.467 [2024-07-15 23:51:47.492644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:12.467 qpair failed and we were unable to recover it.
00:25:12.467 [2024-07-15 23:51:47.493890] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:12.467 [2024-07-15 23:51:47.493916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:12.467 qpair failed and we were unable to recover it.
00:25:12.467 [2024-07-15 23:51:47.494170] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:12.467 [2024-07-15 23:51:47.494209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:12.467 qpair failed and we were unable to recover it.
00:25:12.467 [2024-07-15 23:51:47.497893] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:12.467 [2024-07-15 23:51:47.497932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:12.467 qpair failed and we were unable to recover it.
[... the same three-line failure pattern (posix_sock_create connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock sock connection error / "qpair failed and we were unable to recover it.") repeats continuously for these four tqpair values (0x7feb94000b90, 0x7a7200, 0x7feb84000b90, 0x7feb8c000b90), always against addr=10.0.0.2, port=4420, from [2024-07-15 23:51:47.492617] through [2024-07-15 23:51:47.524236] ...]
00:25:12.471 [2024-07-15 23:51:47.524360] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.471 [2024-07-15 23:51:47.524387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.471 qpair failed and we were unable to recover it. 00:25:12.471 [2024-07-15 23:51:47.524480] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.471 [2024-07-15 23:51:47.524507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.471 qpair failed and we were unable to recover it. 00:25:12.471 [2024-07-15 23:51:47.524632] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.471 [2024-07-15 23:51:47.524658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.471 qpair failed and we were unable to recover it. 00:25:12.471 [2024-07-15 23:51:47.524762] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.471 [2024-07-15 23:51:47.524802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.471 qpair failed and we were unable to recover it. 00:25:12.471 [2024-07-15 23:51:47.524935] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.471 [2024-07-15 23:51:47.524968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.471 qpair failed and we were unable to recover it. 00:25:12.471 [2024-07-15 23:51:47.525070] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.471 [2024-07-15 23:51:47.525097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.471 qpair failed and we were unable to recover it. 00:25:12.471 [2024-07-15 23:51:47.525195] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.471 [2024-07-15 23:51:47.525222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.471 qpair failed and we were unable to recover it. 00:25:12.471 [2024-07-15 23:51:47.525344] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.471 [2024-07-15 23:51:47.525394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.471 qpair failed and we were unable to recover it. 00:25:12.471 [2024-07-15 23:51:47.525487] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.471 [2024-07-15 23:51:47.525514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.471 qpair failed and we were unable to recover it. 00:25:12.471 [2024-07-15 23:51:47.525641] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.471 [2024-07-15 23:51:47.525667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.471 qpair failed and we were unable to recover it. 
00:25:12.471 [2024-07-15 23:51:47.525791] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.471 [2024-07-15 23:51:47.525817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.471 qpair failed and we were unable to recover it. 00:25:12.471 [2024-07-15 23:51:47.525917] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.471 [2024-07-15 23:51:47.525944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.471 qpair failed and we were unable to recover it. 00:25:12.471 [2024-07-15 23:51:47.526045] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.471 [2024-07-15 23:51:47.526071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.471 qpair failed and we were unable to recover it. 00:25:12.471 [2024-07-15 23:51:47.526164] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.471 [2024-07-15 23:51:47.526192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.471 qpair failed and we were unable to recover it. 00:25:12.471 [2024-07-15 23:51:47.526316] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.471 [2024-07-15 23:51:47.526342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.471 qpair failed and we were unable to recover it. 00:25:12.471 [2024-07-15 23:51:47.526440] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.471 [2024-07-15 23:51:47.526467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.471 qpair failed and we were unable to recover it. 00:25:12.471 [2024-07-15 23:51:47.526563] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.471 [2024-07-15 23:51:47.526588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.471 qpair failed and we were unable to recover it. 00:25:12.471 [2024-07-15 23:51:47.526711] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.471 [2024-07-15 23:51:47.526736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.471 qpair failed and we were unable to recover it. 00:25:12.471 [2024-07-15 23:51:47.526835] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.471 [2024-07-15 23:51:47.526860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.471 qpair failed and we were unable to recover it. 00:25:12.471 [2024-07-15 23:51:47.526951] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.471 [2024-07-15 23:51:47.526983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.471 qpair failed and we were unable to recover it. 
00:25:12.471 [2024-07-15 23:51:47.527088] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.471 [2024-07-15 23:51:47.527113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.471 qpair failed and we were unable to recover it. 00:25:12.471 [2024-07-15 23:51:47.527264] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.471 [2024-07-15 23:51:47.527290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.471 qpair failed and we were unable to recover it. 00:25:12.471 [2024-07-15 23:51:47.527414] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.471 [2024-07-15 23:51:47.527440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.471 qpair failed and we were unable to recover it. 00:25:12.471 [2024-07-15 23:51:47.527537] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.471 [2024-07-15 23:51:47.527563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.471 qpair failed and we were unable to recover it. 00:25:12.471 [2024-07-15 23:51:47.527688] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.471 [2024-07-15 23:51:47.527715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.471 qpair failed and we were unable to recover it. 00:25:12.471 [2024-07-15 23:51:47.527821] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.471 [2024-07-15 23:51:47.527847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.471 qpair failed and we were unable to recover it. 00:25:12.471 [2024-07-15 23:51:47.527972] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.471 [2024-07-15 23:51:47.528011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.471 qpair failed and we were unable to recover it. 00:25:12.471 [2024-07-15 23:51:47.528118] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.471 [2024-07-15 23:51:47.528145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.471 qpair failed and we were unable to recover it. 00:25:12.471 [2024-07-15 23:51:47.528271] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.471 [2024-07-15 23:51:47.528298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.471 qpair failed and we were unable to recover it. 00:25:12.471 [2024-07-15 23:51:47.528391] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.471 [2024-07-15 23:51:47.528416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.471 qpair failed and we were unable to recover it. 
00:25:12.471 [2024-07-15 23:51:47.528519] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.471 [2024-07-15 23:51:47.528546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.471 qpair failed and we were unable to recover it. 00:25:12.471 [2024-07-15 23:51:47.528691] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.471 [2024-07-15 23:51:47.528717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.471 qpair failed and we were unable to recover it. 00:25:12.471 [2024-07-15 23:51:47.528837] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.471 [2024-07-15 23:51:47.528865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.471 qpair failed and we were unable to recover it. 00:25:12.471 [2024-07-15 23:51:47.528983] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.471 [2024-07-15 23:51:47.529010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.471 qpair failed and we were unable to recover it. 00:25:12.471 [2024-07-15 23:51:47.529102] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.471 [2024-07-15 23:51:47.529129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.471 qpair failed and we were unable to recover it. 00:25:12.471 [2024-07-15 23:51:47.529257] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.471 [2024-07-15 23:51:47.529283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.471 qpair failed and we were unable to recover it. 00:25:12.471 [2024-07-15 23:51:47.529437] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.471 [2024-07-15 23:51:47.529488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.471 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.529613] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.529644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.529772] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.529799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.529964] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.530003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 
00:25:12.472 [2024-07-15 23:51:47.530131] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.530158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.530252] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.530279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.530435] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.530487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.530615] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.530660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.530806] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.530832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.530981] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.531007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.531131] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.531157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.531259] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.531286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.531421] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.531475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.531625] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.531650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 
00:25:12.472 [2024-07-15 23:51:47.531776] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.531802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.531906] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.531934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.532106] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.532146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.532274] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.532301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.532404] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.532430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.532625] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.532678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.532786] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.532811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.532938] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.532971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.533071] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.533097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.533197] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.533223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 
00:25:12.472 [2024-07-15 23:51:47.533374] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.533401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.533571] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.533624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.533747] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.533773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.533868] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.533894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.534054] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.534092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.534220] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.534248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.534341] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.534367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.534468] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.534493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.534622] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.534647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.534767] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.534793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 
00:25:12.472 [2024-07-15 23:51:47.534947] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.534978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.535080] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.535106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.535228] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.535254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.535375] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.535401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.535531] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.535557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.535679] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.535706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.535804] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.535830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.535976] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.536021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.536128] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.536157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.536286] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.536314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 
00:25:12.472 [2024-07-15 23:51:47.536415] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.536441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.536549] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.536587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.536689] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.536717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.536816] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.536843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.536970] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.536997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.537110] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.537136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.537235] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.537262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.537389] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.537417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.537510] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.537536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.537630] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.537657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 
00:25:12.472 [2024-07-15 23:51:47.537757] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.537784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.537909] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.537935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.472 [2024-07-15 23:51:47.538067] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.472 [2024-07-15 23:51:47.538096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.472 qpair failed and we were unable to recover it. 00:25:12.473 [2024-07-15 23:51:47.538222] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.473 [2024-07-15 23:51:47.538249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.473 qpair failed and we were unable to recover it. 00:25:12.473 [2024-07-15 23:51:47.538396] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.473 [2024-07-15 23:51:47.538422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.473 qpair failed and we were unable to recover it. 00:25:12.473 [2024-07-15 23:51:47.538524] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.473 [2024-07-15 23:51:47.538550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.473 qpair failed and we were unable to recover it. 00:25:12.473 [2024-07-15 23:51:47.538700] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.473 [2024-07-15 23:51:47.538728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.473 qpair failed and we were unable to recover it. 00:25:12.473 [2024-07-15 23:51:47.538853] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.473 [2024-07-15 23:51:47.538880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.473 qpair failed and we were unable to recover it. 00:25:12.473 [2024-07-15 23:51:47.538978] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.473 [2024-07-15 23:51:47.539005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.473 qpair failed and we were unable to recover it. 00:25:12.473 [2024-07-15 23:51:47.539127] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.473 [2024-07-15 23:51:47.539153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.473 qpair failed and we were unable to recover it. 
00:25:12.473 [2024-07-15 23:51:47.539271] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.473 [2024-07-15 23:51:47.539297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.473 qpair failed and we were unable to recover it. 00:25:12.473 [2024-07-15 23:51:47.539465] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.473 [2024-07-15 23:51:47.539522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.473 qpair failed and we were unable to recover it. 00:25:12.473 [2024-07-15 23:51:47.539643] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.473 [2024-07-15 23:51:47.539669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.473 qpair failed and we were unable to recover it. 00:25:12.473 [2024-07-15 23:51:47.539757] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.473 [2024-07-15 23:51:47.539783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.473 qpair failed and we were unable to recover it. 00:25:12.473 [2024-07-15 23:51:47.539904] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.473 [2024-07-15 23:51:47.539935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.473 qpair failed and we were unable to recover it. 00:25:12.473 [2024-07-15 23:51:47.540054] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.473 [2024-07-15 23:51:47.540080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.473 qpair failed and we were unable to recover it. 00:25:12.473 [2024-07-15 23:51:47.540200] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.473 [2024-07-15 23:51:47.540226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.473 qpair failed and we were unable to recover it. 00:25:12.473 [2024-07-15 23:51:47.540346] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.473 [2024-07-15 23:51:47.540372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.473 qpair failed and we were unable to recover it. 00:25:12.473 [2024-07-15 23:51:47.540469] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.473 [2024-07-15 23:51:47.540495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.473 qpair failed and we were unable to recover it. 00:25:12.473 [2024-07-15 23:51:47.540624] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.473 [2024-07-15 23:51:47.540650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.473 qpair failed and we were unable to recover it. 
00:25:12.473 [2024-07-15 23:51:47.540769] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.473 [2024-07-15 23:51:47.540795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.473 qpair failed and we were unable to recover it. 00:25:12.473 [2024-07-15 23:51:47.540882] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.473 [2024-07-15 23:51:47.540907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.473 qpair failed and we were unable to recover it. 00:25:12.473 [2024-07-15 23:51:47.541032] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.473 [2024-07-15 23:51:47.541059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.473 qpair failed and we were unable to recover it. 00:25:12.473 [2024-07-15 23:51:47.541187] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.473 [2024-07-15 23:51:47.541213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.473 qpair failed and we were unable to recover it. 00:25:12.473 [2024-07-15 23:51:47.541305] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.473 [2024-07-15 23:51:47.541331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.473 qpair failed and we were unable to recover it. 00:25:12.473 [2024-07-15 23:51:47.541450] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.473 [2024-07-15 23:51:47.541475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.473 qpair failed and we were unable to recover it. 00:25:12.473 [2024-07-15 23:51:47.541565] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.473 [2024-07-15 23:51:47.541592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.473 qpair failed and we were unable to recover it. 00:25:12.473 [2024-07-15 23:51:47.541713] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.473 [2024-07-15 23:51:47.541739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.473 qpair failed and we were unable to recover it. 00:25:12.473 [2024-07-15 23:51:47.541878] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.473 [2024-07-15 23:51:47.541916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.473 qpair failed and we were unable to recover it. 00:25:12.473 [2024-07-15 23:51:47.542036] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.473 [2024-07-15 23:51:47.542064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.473 qpair failed and we were unable to recover it. 
00:25:12.473 [2024-07-15 23:51:47.542167] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.473 [2024-07-15 23:51:47.542193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.473 qpair failed and we were unable to recover it. 00:25:12.473 [2024-07-15 23:51:47.542284] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.473 [2024-07-15 23:51:47.542310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.473 qpair failed and we were unable to recover it. 00:25:12.473 [2024-07-15 23:51:47.542473] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.473 [2024-07-15 23:51:47.542523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.473 qpair failed and we were unable to recover it. 00:25:12.473 [2024-07-15 23:51:47.542677] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.473 [2024-07-15 23:51:47.542730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.473 qpair failed and we were unable to recover it. 00:25:12.473 [2024-07-15 23:51:47.542855] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.473 [2024-07-15 23:51:47.542881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.473 qpair failed and we were unable to recover it. 00:25:12.473 [2024-07-15 23:51:47.542980] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.473 [2024-07-15 23:51:47.543006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.473 qpair failed and we were unable to recover it. 00:25:12.473 [2024-07-15 23:51:47.543110] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.473 [2024-07-15 23:51:47.543135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.473 qpair failed and we were unable to recover it. 00:25:12.473 [2024-07-15 23:51:47.543258] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.473 [2024-07-15 23:51:47.543283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.473 qpair failed and we were unable to recover it. 00:25:12.473 [2024-07-15 23:51:47.543408] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.473 [2024-07-15 23:51:47.543433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.473 qpair failed and we were unable to recover it. 00:25:12.473 [2024-07-15 23:51:47.543635] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.473 [2024-07-15 23:51:47.543690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.473 qpair failed and we were unable to recover it. 
00:25:12.473 [2024-07-15 23:51:47.543826] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:12.473 [2024-07-15 23:51:47.543854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:12.473 qpair failed and we were unable to recover it.
00:25:12.473 [2024-07-15 23:51:47.543995] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:12.473 [2024-07-15 23:51:47.544035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:12.473 qpair failed and we were unable to recover it.
00:25:12.473 [2024-07-15 23:51:47.544140] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:12.473 [2024-07-15 23:51:47.544168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:12.473 qpair failed and we were unable to recover it.
[... the same three-line sequence — posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error; qpair failed and we were unable to recover it. — repeats through 23:51:47.575 for tqpair handles 0x7feb94000b90, 0x7feb8c000b90, 0x7feb84000b90, and 0x7a7200, all targeting addr=10.0.0.2, port=4420 ...]
00:25:12.785 [2024-07-15 23:51:47.575029] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:12.785 [2024-07-15 23:51:47.575056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:12.785 qpair failed and we were unable to recover it.
00:25:12.785 [2024-07-15 23:51:47.575156] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.785 [2024-07-15 23:51:47.575182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.785 qpair failed and we were unable to recover it. 00:25:12.785 [2024-07-15 23:51:47.575272] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.785 [2024-07-15 23:51:47.575298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.785 qpair failed and we were unable to recover it. 00:25:12.785 [2024-07-15 23:51:47.575418] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.785 [2024-07-15 23:51:47.575444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.785 qpair failed and we were unable to recover it. 00:25:12.785 [2024-07-15 23:51:47.575545] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.785 [2024-07-15 23:51:47.575573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.785 qpair failed and we were unable to recover it. 00:25:12.785 [2024-07-15 23:51:47.575699] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.785 [2024-07-15 23:51:47.575725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.785 qpair failed and we were unable to recover it. 00:25:12.785 [2024-07-15 23:51:47.575835] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.785 [2024-07-15 23:51:47.575874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.785 qpair failed and we were unable to recover it. 00:25:12.785 [2024-07-15 23:51:47.576028] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.785 [2024-07-15 23:51:47.576056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.785 qpair failed and we were unable to recover it. 00:25:12.785 [2024-07-15 23:51:47.576165] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.785 [2024-07-15 23:51:47.576204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.785 qpair failed and we were unable to recover it. 00:25:12.785 [2024-07-15 23:51:47.576330] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.785 [2024-07-15 23:51:47.576357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.785 qpair failed and we were unable to recover it. 00:25:12.785 [2024-07-15 23:51:47.576485] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.785 [2024-07-15 23:51:47.576528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.785 qpair failed and we were unable to recover it. 
00:25:12.785 [2024-07-15 23:51:47.576697] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.785 [2024-07-15 23:51:47.576746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.785 qpair failed and we were unable to recover it. 00:25:12.786 [2024-07-15 23:51:47.576849] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.786 [2024-07-15 23:51:47.576877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.786 qpair failed and we were unable to recover it. 00:25:12.786 [2024-07-15 23:51:47.577006] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.786 [2024-07-15 23:51:47.577033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.786 qpair failed and we were unable to recover it. 00:25:12.786 [2024-07-15 23:51:47.577153] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.786 [2024-07-15 23:51:47.577179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.786 qpair failed and we were unable to recover it. 00:25:12.786 [2024-07-15 23:51:47.577300] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.786 [2024-07-15 23:51:47.577326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.786 qpair failed and we were unable to recover it. 00:25:12.786 [2024-07-15 23:51:47.577447] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.786 [2024-07-15 23:51:47.577473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.786 qpair failed and we were unable to recover it. 00:25:12.786 [2024-07-15 23:51:47.577600] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.786 [2024-07-15 23:51:47.577626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.786 qpair failed and we were unable to recover it. 00:25:12.786 [2024-07-15 23:51:47.577754] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.786 [2024-07-15 23:51:47.577781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.786 qpair failed and we were unable to recover it. 00:25:12.786 [2024-07-15 23:51:47.577877] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.786 [2024-07-15 23:51:47.577903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.786 qpair failed and we were unable to recover it. 00:25:12.786 [2024-07-15 23:51:47.578029] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.786 [2024-07-15 23:51:47.578074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.786 qpair failed and we were unable to recover it. 
00:25:12.786 [2024-07-15 23:51:47.578196] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.786 [2024-07-15 23:51:47.578223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.786 qpair failed and we were unable to recover it. 00:25:12.786 [2024-07-15 23:51:47.578376] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.786 [2024-07-15 23:51:47.578426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.786 qpair failed and we were unable to recover it. 00:25:12.786 [2024-07-15 23:51:47.578561] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.786 [2024-07-15 23:51:47.578611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.786 qpair failed and we were unable to recover it. 00:25:12.786 [2024-07-15 23:51:47.578762] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.786 [2024-07-15 23:51:47.578808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.786 qpair failed and we were unable to recover it. 00:25:12.786 [2024-07-15 23:51:47.578967] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.786 [2024-07-15 23:51:47.579007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.786 qpair failed and we were unable to recover it. 00:25:12.786 [2024-07-15 23:51:47.579113] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.786 [2024-07-15 23:51:47.579141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.786 qpair failed and we were unable to recover it. 00:25:12.786 [2024-07-15 23:51:47.579289] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.786 [2024-07-15 23:51:47.579315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.786 qpair failed and we were unable to recover it. 00:25:12.786 [2024-07-15 23:51:47.579438] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.786 [2024-07-15 23:51:47.579464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.786 qpair failed and we were unable to recover it. 00:25:12.786 [2024-07-15 23:51:47.579669] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.786 [2024-07-15 23:51:47.579722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.786 qpair failed and we were unable to recover it. 00:25:12.786 [2024-07-15 23:51:47.579846] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.786 [2024-07-15 23:51:47.579872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.786 qpair failed and we were unable to recover it. 
00:25:12.786 [2024-07-15 23:51:47.580028] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.786 [2024-07-15 23:51:47.580055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.786 qpair failed and we were unable to recover it. 00:25:12.786 [2024-07-15 23:51:47.580180] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.786 [2024-07-15 23:51:47.580205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.786 qpair failed and we were unable to recover it. 00:25:12.786 [2024-07-15 23:51:47.580327] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.786 [2024-07-15 23:51:47.580354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.786 qpair failed and we were unable to recover it. 00:25:12.786 [2024-07-15 23:51:47.580484] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.786 [2024-07-15 23:51:47.580510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.786 qpair failed and we were unable to recover it. 00:25:12.786 [2024-07-15 23:51:47.580679] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.786 [2024-07-15 23:51:47.580718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.786 qpair failed and we were unable to recover it. 00:25:12.786 [2024-07-15 23:51:47.580851] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.786 [2024-07-15 23:51:47.580878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.786 qpair failed and we were unable to recover it. 00:25:12.786 [2024-07-15 23:51:47.580973] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.786 [2024-07-15 23:51:47.581000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.786 qpair failed and we were unable to recover it. 00:25:12.786 [2024-07-15 23:51:47.581124] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.786 [2024-07-15 23:51:47.581149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.786 qpair failed and we were unable to recover it. 00:25:12.786 [2024-07-15 23:51:47.581265] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.786 [2024-07-15 23:51:47.581291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.786 qpair failed and we were unable to recover it. 00:25:12.786 [2024-07-15 23:51:47.581395] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.786 [2024-07-15 23:51:47.581421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.786 qpair failed and we were unable to recover it. 
00:25:12.786 [2024-07-15 23:51:47.581515] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.786 [2024-07-15 23:51:47.581542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.786 qpair failed and we were unable to recover it. 00:25:12.786 [2024-07-15 23:51:47.581691] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.786 [2024-07-15 23:51:47.581717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.786 qpair failed and we were unable to recover it. 00:25:12.786 [2024-07-15 23:51:47.581827] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.786 [2024-07-15 23:51:47.581866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.786 qpair failed and we were unable to recover it. 00:25:12.786 [2024-07-15 23:51:47.581976] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.786 [2024-07-15 23:51:47.582005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.786 qpair failed and we were unable to recover it. 00:25:12.786 [2024-07-15 23:51:47.582158] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.786 [2024-07-15 23:51:47.582184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.786 qpair failed and we were unable to recover it. 00:25:12.786 [2024-07-15 23:51:47.582313] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.786 [2024-07-15 23:51:47.582340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.786 qpair failed and we were unable to recover it. 00:25:12.786 [2024-07-15 23:51:47.582477] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.786 [2024-07-15 23:51:47.582503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.786 qpair failed and we were unable to recover it. 00:25:12.786 [2024-07-15 23:51:47.582601] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.786 [2024-07-15 23:51:47.582627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.786 qpair failed and we were unable to recover it. 00:25:12.786 [2024-07-15 23:51:47.582752] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.786 [2024-07-15 23:51:47.582779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.786 qpair failed and we were unable to recover it. 00:25:12.786 [2024-07-15 23:51:47.582927] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.786 [2024-07-15 23:51:47.582952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.786 qpair failed and we were unable to recover it. 
00:25:12.786 [2024-07-15 23:51:47.583112] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.786 [2024-07-15 23:51:47.583152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.786 qpair failed and we were unable to recover it. 00:25:12.787 [2024-07-15 23:51:47.583260] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.787 [2024-07-15 23:51:47.583287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.787 qpair failed and we were unable to recover it. 00:25:12.787 [2024-07-15 23:51:47.583390] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.787 [2024-07-15 23:51:47.583418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.787 qpair failed and we were unable to recover it. 00:25:12.787 [2024-07-15 23:51:47.583546] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.787 [2024-07-15 23:51:47.583573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.787 qpair failed and we were unable to recover it. 00:25:12.787 [2024-07-15 23:51:47.583677] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.787 [2024-07-15 23:51:47.583705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.787 qpair failed and we were unable to recover it. 00:25:12.787 [2024-07-15 23:51:47.583803] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.787 [2024-07-15 23:51:47.583830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.787 qpair failed and we were unable to recover it. 00:25:12.787 [2024-07-15 23:51:47.583953] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.787 [2024-07-15 23:51:47.583987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.787 qpair failed and we were unable to recover it. 00:25:12.787 [2024-07-15 23:51:47.584113] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.787 [2024-07-15 23:51:47.584140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.787 qpair failed and we were unable to recover it. 00:25:12.787 [2024-07-15 23:51:47.584249] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.787 [2024-07-15 23:51:47.584287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.787 qpair failed and we were unable to recover it. 00:25:12.787 [2024-07-15 23:51:47.584384] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.787 [2024-07-15 23:51:47.584416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.787 qpair failed and we were unable to recover it. 
00:25:12.787 [2024-07-15 23:51:47.584550] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.787 [2024-07-15 23:51:47.584578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.787 qpair failed and we were unable to recover it. 00:25:12.787 [2024-07-15 23:51:47.584700] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.787 [2024-07-15 23:51:47.584726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.787 qpair failed and we were unable to recover it. 00:25:12.787 [2024-07-15 23:51:47.584860] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.787 [2024-07-15 23:51:47.584900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.787 qpair failed and we were unable to recover it. 00:25:12.787 [2024-07-15 23:51:47.585003] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.787 [2024-07-15 23:51:47.585031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.787 qpair failed and we were unable to recover it. 00:25:12.787 [2024-07-15 23:51:47.585161] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.787 [2024-07-15 23:51:47.585187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.787 qpair failed and we were unable to recover it. 00:25:12.787 [2024-07-15 23:51:47.585310] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.787 [2024-07-15 23:51:47.585337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.787 qpair failed and we were unable to recover it. 00:25:12.787 [2024-07-15 23:51:47.585491] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.787 [2024-07-15 23:51:47.585517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.787 qpair failed and we were unable to recover it. 00:25:12.787 [2024-07-15 23:51:47.585652] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.787 [2024-07-15 23:51:47.585680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.787 qpair failed and we were unable to recover it. 00:25:12.787 [2024-07-15 23:51:47.585784] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.787 [2024-07-15 23:51:47.585810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.787 qpair failed and we were unable to recover it. 00:25:12.787 [2024-07-15 23:51:47.585915] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.787 [2024-07-15 23:51:47.585942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.787 qpair failed and we were unable to recover it. 
00:25:12.787 [2024-07-15 23:51:47.586050] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.787 [2024-07-15 23:51:47.586076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.787 qpair failed and we were unable to recover it. 00:25:12.787 [2024-07-15 23:51:47.586173] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.787 [2024-07-15 23:51:47.586198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.787 qpair failed and we were unable to recover it. 00:25:12.787 [2024-07-15 23:51:47.586295] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.787 [2024-07-15 23:51:47.586325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.787 qpair failed and we were unable to recover it. 00:25:12.787 [2024-07-15 23:51:47.586483] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.787 [2024-07-15 23:51:47.586509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.787 qpair failed and we were unable to recover it. 00:25:12.787 [2024-07-15 23:51:47.586610] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.787 [2024-07-15 23:51:47.586638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.787 qpair failed and we were unable to recover it. 00:25:12.787 [2024-07-15 23:51:47.586764] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.787 [2024-07-15 23:51:47.586790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.787 qpair failed and we were unable to recover it. 00:25:12.787 [2024-07-15 23:51:47.586906] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.787 [2024-07-15 23:51:47.586934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.787 qpair failed and we were unable to recover it. 00:25:12.787 [2024-07-15 23:51:47.587062] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.787 [2024-07-15 23:51:47.587088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.787 qpair failed and we were unable to recover it. 00:25:12.787 [2024-07-15 23:51:47.587184] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.787 [2024-07-15 23:51:47.587209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.787 qpair failed and we were unable to recover it. 00:25:12.787 [2024-07-15 23:51:47.587339] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.787 [2024-07-15 23:51:47.587365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.787 qpair failed and we were unable to recover it. 
00:25:12.787 [2024-07-15 23:51:47.587499] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.787 [2024-07-15 23:51:47.587550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.787 qpair failed and we were unable to recover it. 00:25:12.787 [2024-07-15 23:51:47.587678] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.787 [2024-07-15 23:51:47.587706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.787 qpair failed and we were unable to recover it. 00:25:12.787 [2024-07-15 23:51:47.587858] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.787 [2024-07-15 23:51:47.587885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.787 qpair failed and we were unable to recover it. 00:25:12.787 [2024-07-15 23:51:47.588037] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.787 [2024-07-15 23:51:47.588064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.787 qpair failed and we were unable to recover it. 00:25:12.787 [2024-07-15 23:51:47.588169] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.788 [2024-07-15 23:51:47.588195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.788 qpair failed and we were unable to recover it. 00:25:12.788 [2024-07-15 23:51:47.588287] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.788 [2024-07-15 23:51:47.588313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.788 qpair failed and we were unable to recover it. 00:25:12.788 [2024-07-15 23:51:47.588408] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.788 [2024-07-15 23:51:47.588439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.788 qpair failed and we were unable to recover it. 00:25:12.788 [2024-07-15 23:51:47.588561] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.788 [2024-07-15 23:51:47.588587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.788 qpair failed and we were unable to recover it. 00:25:12.788 [2024-07-15 23:51:47.588681] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.788 [2024-07-15 23:51:47.588707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.788 qpair failed and we were unable to recover it. 00:25:12.788 [2024-07-15 23:51:47.588793] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.788 [2024-07-15 23:51:47.588818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.788 qpair failed and we were unable to recover it. 
00:25:12.788 [2024-07-15 23:51:47.588941] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.788 [2024-07-15 23:51:47.588977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.788 qpair failed and we were unable to recover it. 00:25:12.788 [2024-07-15 23:51:47.589078] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.788 [2024-07-15 23:51:47.589103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.788 qpair failed and we were unable to recover it. 00:25:12.788 [2024-07-15 23:51:47.589200] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.788 [2024-07-15 23:51:47.589226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.788 qpair failed and we were unable to recover it. 00:25:12.788 [2024-07-15 23:51:47.589345] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.788 [2024-07-15 23:51:47.589371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.788 qpair failed and we were unable to recover it. 00:25:12.788 [2024-07-15 23:51:47.589496] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.788 [2024-07-15 23:51:47.589522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.788 qpair failed and we were unable to recover it. 00:25:12.788 [2024-07-15 23:51:47.589622] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.788 [2024-07-15 23:51:47.589648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.788 qpair failed and we were unable to recover it. 00:25:12.788 [2024-07-15 23:51:47.589775] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.788 [2024-07-15 23:51:47.589804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.788 qpair failed and we were unable to recover it. 00:25:12.788 [2024-07-15 23:51:47.589909] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.788 [2024-07-15 23:51:47.589936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.788 qpair failed and we were unable to recover it. 00:25:12.788 [2024-07-15 23:51:47.590047] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.788 [2024-07-15 23:51:47.590074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.788 qpair failed and we were unable to recover it. 00:25:12.788 [2024-07-15 23:51:47.590194] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.788 [2024-07-15 23:51:47.590220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.788 qpair failed and we were unable to recover it. 
00:25:12.788 [2024-07-15 23:51:47.590353] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.788 [2024-07-15 23:51:47.590379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.788 qpair failed and we were unable to recover it. 00:25:12.788 [2024-07-15 23:51:47.590473] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.788 [2024-07-15 23:51:47.590500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.788 qpair failed and we were unable to recover it. 00:25:12.788 [2024-07-15 23:51:47.590591] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.788 [2024-07-15 23:51:47.590617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.788 qpair failed and we were unable to recover it. 00:25:12.788 [2024-07-15 23:51:47.590744] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.788 [2024-07-15 23:51:47.590769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.788 qpair failed and we were unable to recover it. 00:25:12.788 [2024-07-15 23:51:47.590897] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.788 [2024-07-15 23:51:47.590924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.788 qpair failed and we were unable to recover it. 00:25:12.788 [2024-07-15 23:51:47.591062] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.788 [2024-07-15 23:51:47.591088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.788 qpair failed and we were unable to recover it. 00:25:12.788 [2024-07-15 23:51:47.591224] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.788 [2024-07-15 23:51:47.591263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.788 qpair failed and we were unable to recover it. 00:25:12.788 [2024-07-15 23:51:47.591426] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.788 [2024-07-15 23:51:47.591481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.788 qpair failed and we were unable to recover it. 00:25:12.788 [2024-07-15 23:51:47.591647] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.788 [2024-07-15 23:51:47.591673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.788 qpair failed and we were unable to recover it. 00:25:12.788 [2024-07-15 23:51:47.591768] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.788 [2024-07-15 23:51:47.591795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.788 qpair failed and we were unable to recover it. 
00:25:12.788 [2024-07-15 23:51:47.591888] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.788 [2024-07-15 23:51:47.591914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.788 qpair failed and we were unable to recover it. 00:25:12.788 [2024-07-15 23:51:47.592013] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.788 [2024-07-15 23:51:47.592040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.788 qpair failed and we were unable to recover it. 00:25:12.788 [2024-07-15 23:51:47.592141] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.788 [2024-07-15 23:51:47.592168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.788 qpair failed and we were unable to recover it. 00:25:12.788 [2024-07-15 23:51:47.592302] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.788 [2024-07-15 23:51:47.592342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.788 qpair failed and we were unable to recover it. 00:25:12.788 [2024-07-15 23:51:47.592444] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.788 [2024-07-15 23:51:47.592472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.788 qpair failed and we were unable to recover it. 00:25:12.788 [2024-07-15 23:51:47.592638] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.788 [2024-07-15 23:51:47.592685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.788 qpair failed and we were unable to recover it. 00:25:12.788 [2024-07-15 23:51:47.592838] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.788 [2024-07-15 23:51:47.592864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.788 qpair failed and we were unable to recover it. 00:25:12.788 [2024-07-15 23:51:47.592987] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.788 [2024-07-15 23:51:47.593014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.788 qpair failed and we were unable to recover it. 00:25:12.788 [2024-07-15 23:51:47.593116] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.788 [2024-07-15 23:51:47.593141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.788 qpair failed and we were unable to recover it. 00:25:12.788 [2024-07-15 23:51:47.593263] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.788 [2024-07-15 23:51:47.593289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.788 qpair failed and we were unable to recover it. 
00:25:12.788 [2024-07-15 23:51:47.593410] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.788 [2024-07-15 23:51:47.593436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.788 qpair failed and we were unable to recover it. 00:25:12.788 [2024-07-15 23:51:47.593560] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.788 [2024-07-15 23:51:47.593586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.788 qpair failed and we were unable to recover it. 00:25:12.788 [2024-07-15 23:51:47.593716] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.788 [2024-07-15 23:51:47.593744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.788 qpair failed and we were unable to recover it. 00:25:12.788 [2024-07-15 23:51:47.593869] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.788 [2024-07-15 23:51:47.593899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.788 qpair failed and we were unable to recover it. 00:25:12.788 [2024-07-15 23:51:47.593992] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.788 [2024-07-15 23:51:47.594019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.788 qpair failed and we were unable to recover it. 00:25:12.788 [2024-07-15 23:51:47.594229] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.789 [2024-07-15 23:51:47.594255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.789 qpair failed and we were unable to recover it. 00:25:12.789 [2024-07-15 23:51:47.594353] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.789 [2024-07-15 23:51:47.594385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.789 qpair failed and we were unable to recover it. 00:25:12.789 [2024-07-15 23:51:47.594544] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.789 [2024-07-15 23:51:47.594594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.789 qpair failed and we were unable to recover it. 00:25:12.789 [2024-07-15 23:51:47.594688] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.789 [2024-07-15 23:51:47.594715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.789 qpair failed and we were unable to recover it. 00:25:12.789 [2024-07-15 23:51:47.594834] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.789 [2024-07-15 23:51:47.594860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.789 qpair failed and we were unable to recover it. 
00:25:12.789 [2024-07-15 23:51:47.594986] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:12.789 [2024-07-15 23:51:47.595013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:12.789 qpair failed and we were unable to recover it.
00:25:12.789 [... the same three-line failure pattern repeats roughly 200 more times between 23:51:47.595 and 23:51:47.625, cycling through tqpair=0x7feb84000b90, 0x7feb8c000b90, 0x7feb94000b90 and 0x7a7200, always against addr=10.0.0.2, port=4420, and always ending in "qpair failed and we were unable to recover it." ...]
00:25:12.794 [2024-07-15 23:51:47.625688] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:12.794 [2024-07-15 23:51:47.625715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:12.794 qpair failed and we were unable to recover it.
00:25:12.794 [2024-07-15 23:51:47.625816] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.794 [2024-07-15 23:51:47.625842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.794 qpair failed and we were unable to recover it. 00:25:12.794 [2024-07-15 23:51:47.625970] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.794 [2024-07-15 23:51:47.625997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.794 qpair failed and we were unable to recover it. 00:25:12.794 [2024-07-15 23:51:47.626121] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.794 [2024-07-15 23:51:47.626147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.794 qpair failed and we were unable to recover it. 00:25:12.794 [2024-07-15 23:51:47.626266] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.794 [2024-07-15 23:51:47.626292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.794 qpair failed and we were unable to recover it. 00:25:12.794 [2024-07-15 23:51:47.626440] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.794 [2024-07-15 23:51:47.626465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.794 qpair failed and we were unable to recover it. 00:25:12.794 [2024-07-15 23:51:47.626563] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.794 [2024-07-15 23:51:47.626589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.794 qpair failed and we were unable to recover it. 00:25:12.794 [2024-07-15 23:51:47.626738] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.794 [2024-07-15 23:51:47.626764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.794 qpair failed and we were unable to recover it. 00:25:12.794 [2024-07-15 23:51:47.626891] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.794 [2024-07-15 23:51:47.626920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.794 qpair failed and we were unable to recover it. 00:25:12.794 [2024-07-15 23:51:47.627024] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.794 [2024-07-15 23:51:47.627050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.794 qpair failed and we were unable to recover it. 00:25:12.794 [2024-07-15 23:51:47.627166] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.794 [2024-07-15 23:51:47.627191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.794 qpair failed and we were unable to recover it. 
00:25:12.794 [2024-07-15 23:51:47.627315] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.794 [2024-07-15 23:51:47.627340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.794 qpair failed and we were unable to recover it. 00:25:12.794 [2024-07-15 23:51:47.627456] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.794 [2024-07-15 23:51:47.627482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.794 qpair failed and we were unable to recover it. 00:25:12.794 [2024-07-15 23:51:47.627607] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.794 [2024-07-15 23:51:47.627657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.794 qpair failed and we were unable to recover it. 00:25:12.794 [2024-07-15 23:51:47.627755] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.794 [2024-07-15 23:51:47.627782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.794 qpair failed and we were unable to recover it. 00:25:12.794 [2024-07-15 23:51:47.627887] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.794 [2024-07-15 23:51:47.627913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.794 qpair failed and we were unable to recover it. 00:25:12.794 [2024-07-15 23:51:47.628043] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.794 [2024-07-15 23:51:47.628070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.794 qpair failed and we were unable to recover it. 00:25:12.794 [2024-07-15 23:51:47.628160] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.794 [2024-07-15 23:51:47.628186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.794 qpair failed and we were unable to recover it. 00:25:12.794 [2024-07-15 23:51:47.628288] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.794 [2024-07-15 23:51:47.628314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.794 qpair failed and we were unable to recover it. 00:25:12.794 [2024-07-15 23:51:47.628429] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.794 [2024-07-15 23:51:47.628455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.794 qpair failed and we were unable to recover it. 00:25:12.794 [2024-07-15 23:51:47.628596] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.794 [2024-07-15 23:51:47.628622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.794 qpair failed and we were unable to recover it. 
00:25:12.794 [2024-07-15 23:51:47.628752] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.794 [2024-07-15 23:51:47.628777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.794 qpair failed and we were unable to recover it. 00:25:12.794 [2024-07-15 23:51:47.628937] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.794 [2024-07-15 23:51:47.628987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.794 qpair failed and we were unable to recover it. 00:25:12.795 [2024-07-15 23:51:47.629094] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.795 [2024-07-15 23:51:47.629122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.795 qpair failed and we were unable to recover it. 00:25:12.795 [2024-07-15 23:51:47.629247] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.795 [2024-07-15 23:51:47.629273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.795 qpair failed and we were unable to recover it. 00:25:12.795 [2024-07-15 23:51:47.629387] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.795 [2024-07-15 23:51:47.629413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.795 qpair failed and we were unable to recover it. 00:25:12.795 [2024-07-15 23:51:47.629509] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.795 [2024-07-15 23:51:47.629535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.795 qpair failed and we were unable to recover it. 00:25:12.795 [2024-07-15 23:51:47.629677] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.795 [2024-07-15 23:51:47.629726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.795 qpair failed and we were unable to recover it. 00:25:12.795 [2024-07-15 23:51:47.629817] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.795 [2024-07-15 23:51:47.629848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.795 qpair failed and we were unable to recover it. 00:25:12.795 [2024-07-15 23:51:47.629973] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.795 [2024-07-15 23:51:47.630001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.795 qpair failed and we were unable to recover it. 00:25:12.795 [2024-07-15 23:51:47.630104] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.795 [2024-07-15 23:51:47.630130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.795 qpair failed and we were unable to recover it. 
00:25:12.795 [2024-07-15 23:51:47.630282] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.795 [2024-07-15 23:51:47.630308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.795 qpair failed and we were unable to recover it. 00:25:12.795 [2024-07-15 23:51:47.630410] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.795 [2024-07-15 23:51:47.630438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.795 qpair failed and we were unable to recover it. 00:25:12.795 [2024-07-15 23:51:47.630528] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.795 [2024-07-15 23:51:47.630553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.795 qpair failed and we were unable to recover it. 00:25:12.795 [2024-07-15 23:51:47.630648] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.795 [2024-07-15 23:51:47.630674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.795 qpair failed and we were unable to recover it. 00:25:12.795 [2024-07-15 23:51:47.630784] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.795 [2024-07-15 23:51:47.630823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.795 qpair failed and we were unable to recover it. 00:25:12.795 [2024-07-15 23:51:47.631027] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.795 [2024-07-15 23:51:47.631056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.795 qpair failed and we were unable to recover it. 00:25:12.795 [2024-07-15 23:51:47.631149] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.795 [2024-07-15 23:51:47.631175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.795 qpair failed and we were unable to recover it. 00:25:12.795 [2024-07-15 23:51:47.631295] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.795 [2024-07-15 23:51:47.631321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.795 qpair failed and we were unable to recover it. 00:25:12.795 [2024-07-15 23:51:47.631468] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.795 [2024-07-15 23:51:47.631493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.795 qpair failed and we were unable to recover it. 00:25:12.795 [2024-07-15 23:51:47.631637] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.795 [2024-07-15 23:51:47.631688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.795 qpair failed and we were unable to recover it. 
00:25:12.795 [2024-07-15 23:51:47.631814] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.795 [2024-07-15 23:51:47.631839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.795 qpair failed and we were unable to recover it. 00:25:12.795 [2024-07-15 23:51:47.631977] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.795 [2024-07-15 23:51:47.632005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.795 qpair failed and we were unable to recover it. 00:25:12.795 [2024-07-15 23:51:47.632105] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.795 [2024-07-15 23:51:47.632133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.795 qpair failed and we were unable to recover it. 00:25:12.795 [2024-07-15 23:51:47.632287] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.795 [2024-07-15 23:51:47.632313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.795 qpair failed and we were unable to recover it. 00:25:12.795 [2024-07-15 23:51:47.632461] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.795 [2024-07-15 23:51:47.632487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.795 qpair failed and we were unable to recover it. 00:25:12.795 [2024-07-15 23:51:47.632643] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.795 [2024-07-15 23:51:47.632693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.795 qpair failed and we were unable to recover it. 00:25:12.795 [2024-07-15 23:51:47.632789] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.795 [2024-07-15 23:51:47.632816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.795 qpair failed and we were unable to recover it. 00:25:12.795 [2024-07-15 23:51:47.632941] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.795 [2024-07-15 23:51:47.632972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.795 qpair failed and we were unable to recover it. 00:25:12.795 [2024-07-15 23:51:47.633091] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.795 [2024-07-15 23:51:47.633116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.795 qpair failed and we were unable to recover it. 00:25:12.795 [2024-07-15 23:51:47.633242] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.795 [2024-07-15 23:51:47.633269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.795 qpair failed and we were unable to recover it. 
00:25:12.795 [2024-07-15 23:51:47.633417] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.795 [2024-07-15 23:51:47.633442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.795 qpair failed and we were unable to recover it. 00:25:12.795 [2024-07-15 23:51:47.633572] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.795 [2024-07-15 23:51:47.633598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.795 qpair failed and we were unable to recover it. 00:25:12.795 [2024-07-15 23:51:47.633686] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.795 [2024-07-15 23:51:47.633711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.795 qpair failed and we were unable to recover it. 00:25:12.795 [2024-07-15 23:51:47.633809] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.795 [2024-07-15 23:51:47.633837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.795 qpair failed and we were unable to recover it. 00:25:12.795 [2024-07-15 23:51:47.633932] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.795 [2024-07-15 23:51:47.633967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.795 qpair failed and we were unable to recover it. 00:25:12.795 [2024-07-15 23:51:47.634068] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.795 [2024-07-15 23:51:47.634095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.795 qpair failed and we were unable to recover it. 00:25:12.795 [2024-07-15 23:51:47.634223] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.795 [2024-07-15 23:51:47.634249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.795 qpair failed and we were unable to recover it. 00:25:12.795 [2024-07-15 23:51:47.634432] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.795 [2024-07-15 23:51:47.634489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.795 qpair failed and we were unable to recover it. 00:25:12.795 [2024-07-15 23:51:47.634640] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.795 [2024-07-15 23:51:47.634689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.795 qpair failed and we were unable to recover it. 00:25:12.795 [2024-07-15 23:51:47.634807] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.795 [2024-07-15 23:51:47.634833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.795 qpair failed and we were unable to recover it. 
00:25:12.795 [2024-07-15 23:51:47.634965] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.795 [2024-07-15 23:51:47.634991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.795 qpair failed and we were unable to recover it. 00:25:12.795 [2024-07-15 23:51:47.635091] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.795 [2024-07-15 23:51:47.635116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.795 qpair failed and we were unable to recover it. 00:25:12.795 [2024-07-15 23:51:47.635233] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.795 [2024-07-15 23:51:47.635259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.795 qpair failed and we were unable to recover it. 00:25:12.796 [2024-07-15 23:51:47.635406] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.796 [2024-07-15 23:51:47.635431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.796 qpair failed and we were unable to recover it. 00:25:12.796 [2024-07-15 23:51:47.635527] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.796 [2024-07-15 23:51:47.635554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.796 qpair failed and we were unable to recover it. 00:25:12.796 [2024-07-15 23:51:47.635650] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.796 [2024-07-15 23:51:47.635676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.796 qpair failed and we were unable to recover it. 00:25:12.796 [2024-07-15 23:51:47.635802] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.796 [2024-07-15 23:51:47.635829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.796 qpair failed and we were unable to recover it. 00:25:12.796 [2024-07-15 23:51:47.635945] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.796 [2024-07-15 23:51:47.635979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.796 qpair failed and we were unable to recover it. 00:25:12.796 [2024-07-15 23:51:47.636116] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.796 [2024-07-15 23:51:47.636141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.796 qpair failed and we were unable to recover it. 00:25:12.796 [2024-07-15 23:51:47.636235] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.796 [2024-07-15 23:51:47.636260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.796 qpair failed and we were unable to recover it. 
00:25:12.796 [2024-07-15 23:51:47.636383] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.796 [2024-07-15 23:51:47.636409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.796 qpair failed and we were unable to recover it. 00:25:12.796 [2024-07-15 23:51:47.636533] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.796 [2024-07-15 23:51:47.636558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.796 qpair failed and we were unable to recover it. 00:25:12.796 [2024-07-15 23:51:47.636708] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.796 [2024-07-15 23:51:47.636734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.796 qpair failed and we were unable to recover it. 00:25:12.796 [2024-07-15 23:51:47.636853] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.796 [2024-07-15 23:51:47.636879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.796 qpair failed and we were unable to recover it. 00:25:12.796 [2024-07-15 23:51:47.636969] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.796 [2024-07-15 23:51:47.636996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.796 qpair failed and we were unable to recover it. 00:25:12.796 [2024-07-15 23:51:47.637119] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.796 [2024-07-15 23:51:47.637145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.796 qpair failed and we were unable to recover it. 00:25:12.796 [2024-07-15 23:51:47.637265] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.796 [2024-07-15 23:51:47.637290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.796 qpair failed and we were unable to recover it. 00:25:12.796 [2024-07-15 23:51:47.637413] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.796 [2024-07-15 23:51:47.637438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.796 qpair failed and we were unable to recover it. 00:25:12.796 [2024-07-15 23:51:47.637585] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.796 [2024-07-15 23:51:47.637610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.796 qpair failed and we were unable to recover it. 00:25:12.796 [2024-07-15 23:51:47.637732] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.796 [2024-07-15 23:51:47.637758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.796 qpair failed and we were unable to recover it. 
00:25:12.796 [2024-07-15 23:51:47.637884] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.796 [2024-07-15 23:51:47.637909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.796 qpair failed and we were unable to recover it. 00:25:12.796 [2024-07-15 23:51:47.638000] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.796 [2024-07-15 23:51:47.638026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.796 qpair failed and we were unable to recover it. 00:25:12.796 [2024-07-15 23:51:47.638151] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.796 [2024-07-15 23:51:47.638176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.796 qpair failed and we were unable to recover it. 00:25:12.796 [2024-07-15 23:51:47.638269] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.796 [2024-07-15 23:51:47.638294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.796 qpair failed and we were unable to recover it. 00:25:12.796 [2024-07-15 23:51:47.638418] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.796 [2024-07-15 23:51:47.638443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.796 qpair failed and we were unable to recover it. 00:25:12.796 [2024-07-15 23:51:47.638562] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.796 [2024-07-15 23:51:47.638588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.796 qpair failed and we were unable to recover it. 00:25:12.796 [2024-07-15 23:51:47.638682] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.796 [2024-07-15 23:51:47.638708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.796 qpair failed and we were unable to recover it. 00:25:12.796 [2024-07-15 23:51:47.638827] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.796 [2024-07-15 23:51:47.638852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.796 qpair failed and we were unable to recover it. 00:25:12.796 [2024-07-15 23:51:47.638982] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.796 [2024-07-15 23:51:47.639008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.796 qpair failed and we were unable to recover it. 00:25:12.796 [2024-07-15 23:51:47.639130] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.796 [2024-07-15 23:51:47.639156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.796 qpair failed and we were unable to recover it. 
00:25:12.796 [2024-07-15 23:51:47.639285] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.796 [2024-07-15 23:51:47.639310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.796 qpair failed and we were unable to recover it. 00:25:12.796 [2024-07-15 23:51:47.639410] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.796 [2024-07-15 23:51:47.639435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.796 qpair failed and we were unable to recover it. 00:25:12.796 [2024-07-15 23:51:47.639525] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.796 [2024-07-15 23:51:47.639551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.796 qpair failed and we were unable to recover it. 00:25:12.796 [2024-07-15 23:51:47.639673] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.796 [2024-07-15 23:51:47.639699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.796 qpair failed and we were unable to recover it. 00:25:12.796 [2024-07-15 23:51:47.639816] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.796 [2024-07-15 23:51:47.639842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.796 qpair failed and we were unable to recover it. 00:25:12.796 [2024-07-15 23:51:47.639978] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.796 [2024-07-15 23:51:47.640018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.796 qpair failed and we were unable to recover it. 00:25:12.796 [2024-07-15 23:51:47.640130] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.796 [2024-07-15 23:51:47.640159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.796 qpair failed and we were unable to recover it. 00:25:12.796 [2024-07-15 23:51:47.640265] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.797 [2024-07-15 23:51:47.640292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.797 qpair failed and we were unable to recover it. 00:25:12.797 [2024-07-15 23:51:47.640414] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.797 [2024-07-15 23:51:47.640440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.797 qpair failed and we were unable to recover it. 00:25:12.797 [2024-07-15 23:51:47.640542] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.797 [2024-07-15 23:51:47.640568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.797 qpair failed and we were unable to recover it. 
00:25:12.797 [2024-07-15 23:51:47.640693] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.797 [2024-07-15 23:51:47.640720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.797 qpair failed and we were unable to recover it. 00:25:12.797 [2024-07-15 23:51:47.640842] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.797 [2024-07-15 23:51:47.640870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.797 qpair failed and we were unable to recover it. 00:25:12.797 [2024-07-15 23:51:47.641011] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.797 [2024-07-15 23:51:47.641051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.797 qpair failed and we were unable to recover it. 00:25:12.797 [2024-07-15 23:51:47.641165] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.797 [2024-07-15 23:51:47.641191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.797 qpair failed and we were unable to recover it. 00:25:12.797 [2024-07-15 23:51:47.641324] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.797 [2024-07-15 23:51:47.641349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.797 qpair failed and we were unable to recover it. 00:25:12.797 [2024-07-15 23:51:47.641450] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.797 [2024-07-15 23:51:47.641476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.797 qpair failed and we were unable to recover it. 00:25:12.797 [2024-07-15 23:51:47.641598] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.797 [2024-07-15 23:51:47.641623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.797 qpair failed and we were unable to recover it. 00:25:12.797 [2024-07-15 23:51:47.641734] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.797 [2024-07-15 23:51:47.641762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.797 qpair failed and we were unable to recover it. 00:25:12.797 [2024-07-15 23:51:47.641898] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.797 [2024-07-15 23:51:47.641925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.797 qpair failed and we were unable to recover it. 00:25:12.797 [2024-07-15 23:51:47.642068] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.797 [2024-07-15 23:51:47.642096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.797 qpair failed and we were unable to recover it. 
00:25:12.797 [2024-07-15 23:51:47.642226] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.797 [2024-07-15 23:51:47.642253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.797 qpair failed and we were unable to recover it. 00:25:12.797 [2024-07-15 23:51:47.642352] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.797 [2024-07-15 23:51:47.642379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.797 qpair failed and we were unable to recover it. 00:25:12.797 [2024-07-15 23:51:47.642527] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.797 [2024-07-15 23:51:47.642554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.797 qpair failed and we were unable to recover it. 00:25:12.797 [2024-07-15 23:51:47.642646] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.797 [2024-07-15 23:51:47.642672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.797 qpair failed and we were unable to recover it. 00:25:12.797 [2024-07-15 23:51:47.642761] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.797 [2024-07-15 23:51:47.642786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.797 qpair failed and we were unable to recover it. 00:25:12.797 [2024-07-15 23:51:47.642884] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.797 [2024-07-15 23:51:47.642911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.797 qpair failed and we were unable to recover it. 00:25:12.797 [2024-07-15 23:51:47.643022] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.797 [2024-07-15 23:51:47.643049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.797 qpair failed and we were unable to recover it. 00:25:12.797 [2024-07-15 23:51:47.643142] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.797 [2024-07-15 23:51:47.643169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.797 qpair failed and we were unable to recover it. 00:25:12.797 [2024-07-15 23:51:47.643267] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.797 [2024-07-15 23:51:47.643293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.797 qpair failed and we were unable to recover it. 00:25:12.797 [2024-07-15 23:51:47.643384] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.797 [2024-07-15 23:51:47.643410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.797 qpair failed and we were unable to recover it. 
00:25:12.797 [2024-07-15 23:51:47.643530] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.797 [2024-07-15 23:51:47.643557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.797 qpair failed and we were unable to recover it. 00:25:12.797 [2024-07-15 23:51:47.643657] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.797 [2024-07-15 23:51:47.643683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.797 qpair failed and we were unable to recover it. 00:25:12.797 [2024-07-15 23:51:47.643846] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.797 [2024-07-15 23:51:47.643872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.797 qpair failed and we were unable to recover it. 00:25:12.797 [2024-07-15 23:51:47.644007] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.797 [2024-07-15 23:51:47.644046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.797 qpair failed and we were unable to recover it. 00:25:12.797 [2024-07-15 23:51:47.644156] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.797 [2024-07-15 23:51:47.644184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.797 qpair failed and we were unable to recover it. 00:25:12.797 [2024-07-15 23:51:47.644294] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.797 [2024-07-15 23:51:47.644322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.797 qpair failed and we were unable to recover it. 00:25:12.797 [2024-07-15 23:51:47.644417] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.797 [2024-07-15 23:51:47.644443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.797 qpair failed and we were unable to recover it. 00:25:12.797 [2024-07-15 23:51:47.644574] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.797 [2024-07-15 23:51:47.644601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.797 qpair failed and we were unable to recover it. 00:25:12.797 [2024-07-15 23:51:47.644722] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.797 [2024-07-15 23:51:47.644747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.797 qpair failed and we were unable to recover it. 00:25:12.797 [2024-07-15 23:51:47.644838] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.797 [2024-07-15 23:51:47.644864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.797 qpair failed and we were unable to recover it. 
00:25:12.797 [2024-07-15 23:51:47.644985] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:12.797 [2024-07-15 23:51:47.645011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:12.797 qpair failed and we were unable to recover it.
[... the same "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it." triplet repeats for tqpair=0x7feb94000b90 through 2024-07-15 23:51:47.648850 ...]
[... the triplet then repeats for tqpair=0x7feb8c000b90 from 23:51:47.648967 through 23:51:47.660515 ...]
[... the triplet repeats for tqpair=0x7a7200 from 23:51:47.660678 through 23:51:47.665725 ...]
[... the triplet repeats again for tqpair=0x7feb8c000b90 from 23:51:47.665853 through 23:51:47.672371, then twice for tqpair=0x7a7200 at 23:51:47.672479 and 23:51:47.672654 ...]
00:25:12.802 [2024-07-15 23:51:47.672820] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b50e0 is same with the state(5) to be set
00:25:12.802 [2024-07-15 23:51:47.673011] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:12.802 [2024-07-15 23:51:47.673050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:12.802 qpair failed and we were unable to recover it.
[... the triplet repeats for tqpair=0x7feb84000b90 through 2024-07-15 23:51:47.674677 ...]
00:25:12.803 [2024-07-15 23:51:47.674812] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.803 [2024-07-15 23:51:47.674849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.803 qpair failed and we were unable to recover it. 00:25:12.803 [2024-07-15 23:51:47.674977] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.803 [2024-07-15 23:51:47.675005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.803 qpair failed and we were unable to recover it. 00:25:12.803 [2024-07-15 23:51:47.675106] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.803 [2024-07-15 23:51:47.675132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.803 qpair failed and we were unable to recover it. 00:25:12.803 [2024-07-15 23:51:47.675255] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.803 [2024-07-15 23:51:47.675281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.803 qpair failed and we were unable to recover it. 00:25:12.803 [2024-07-15 23:51:47.675433] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.803 [2024-07-15 23:51:47.675471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.803 qpair failed and we were unable to recover it. 00:25:12.803 [2024-07-15 23:51:47.675654] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.803 [2024-07-15 23:51:47.675700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.803 qpair failed and we were unable to recover it. 00:25:12.803 [2024-07-15 23:51:47.675826] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.803 [2024-07-15 23:51:47.675852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.803 qpair failed and we were unable to recover it. 00:25:12.803 [2024-07-15 23:51:47.675974] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.803 [2024-07-15 23:51:47.676001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.803 qpair failed and we were unable to recover it. 00:25:12.803 [2024-07-15 23:51:47.676099] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.803 [2024-07-15 23:51:47.676126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.803 qpair failed and we were unable to recover it. 00:25:12.803 [2024-07-15 23:51:47.676244] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.803 [2024-07-15 23:51:47.676296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.803 qpair failed and we were unable to recover it. 
00:25:12.803 [2024-07-15 23:51:47.676423] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.803 [2024-07-15 23:51:47.676450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.803 qpair failed and we were unable to recover it. 00:25:12.803 [2024-07-15 23:51:47.676541] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.803 [2024-07-15 23:51:47.676568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.803 qpair failed and we were unable to recover it. 00:25:12.803 [2024-07-15 23:51:47.676691] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.803 [2024-07-15 23:51:47.676717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.803 qpair failed and we were unable to recover it. 00:25:12.803 [2024-07-15 23:51:47.676814] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.803 [2024-07-15 23:51:47.676840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.803 qpair failed and we were unable to recover it. 00:25:12.803 [2024-07-15 23:51:47.676964] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.803 [2024-07-15 23:51:47.676991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.803 qpair failed and we were unable to recover it. 00:25:12.803 [2024-07-15 23:51:47.677080] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.803 [2024-07-15 23:51:47.677105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.803 qpair failed and we were unable to recover it. 00:25:12.803 [2024-07-15 23:51:47.677229] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.803 [2024-07-15 23:51:47.677255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.803 qpair failed and we were unable to recover it. 00:25:12.803 [2024-07-15 23:51:47.677375] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.803 [2024-07-15 23:51:47.677402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.803 qpair failed and we were unable to recover it. 00:25:12.803 [2024-07-15 23:51:47.677518] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.803 [2024-07-15 23:51:47.677543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.803 qpair failed and we were unable to recover it. 00:25:12.803 [2024-07-15 23:51:47.677642] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.803 [2024-07-15 23:51:47.677668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.803 qpair failed and we were unable to recover it. 
00:25:12.803 [2024-07-15 23:51:47.677764] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.803 [2024-07-15 23:51:47.677790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.803 qpair failed and we were unable to recover it. 00:25:12.803 [2024-07-15 23:51:47.677879] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.803 [2024-07-15 23:51:47.677905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.803 qpair failed and we were unable to recover it. 00:25:12.803 [2024-07-15 23:51:47.678011] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.803 [2024-07-15 23:51:47.678037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.803 qpair failed and we were unable to recover it. 00:25:12.803 [2024-07-15 23:51:47.678161] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.803 [2024-07-15 23:51:47.678192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.803 qpair failed and we were unable to recover it. 00:25:12.803 [2024-07-15 23:51:47.678345] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.803 [2024-07-15 23:51:47.678372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.803 qpair failed and we were unable to recover it. 00:25:12.803 [2024-07-15 23:51:47.678490] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.803 [2024-07-15 23:51:47.678516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.803 qpair failed and we were unable to recover it. 00:25:12.803 [2024-07-15 23:51:47.678610] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.803 [2024-07-15 23:51:47.678637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.803 qpair failed and we were unable to recover it. 00:25:12.803 [2024-07-15 23:51:47.678763] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.803 [2024-07-15 23:51:47.678788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.803 qpair failed and we were unable to recover it. 00:25:12.803 [2024-07-15 23:51:47.678902] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.803 [2024-07-15 23:51:47.678928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.803 qpair failed and we were unable to recover it. 00:25:12.803 [2024-07-15 23:51:47.679073] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.803 [2024-07-15 23:51:47.679100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.803 qpair failed and we were unable to recover it. 
00:25:12.803 [2024-07-15 23:51:47.679193] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.803 [2024-07-15 23:51:47.679219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.803 qpair failed and we were unable to recover it. 00:25:12.803 [2024-07-15 23:51:47.679335] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.803 [2024-07-15 23:51:47.679361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.803 qpair failed and we were unable to recover it. 00:25:12.803 [2024-07-15 23:51:47.679475] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.803 [2024-07-15 23:51:47.679501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.803 qpair failed and we were unable to recover it. 00:25:12.803 [2024-07-15 23:51:47.679605] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.803 [2024-07-15 23:51:47.679631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.803 qpair failed and we were unable to recover it. 00:25:12.803 [2024-07-15 23:51:47.679727] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.803 [2024-07-15 23:51:47.679754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.804 qpair failed and we were unable to recover it. 00:25:12.804 [2024-07-15 23:51:47.679848] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.804 [2024-07-15 23:51:47.679875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.804 qpair failed and we were unable to recover it. 00:25:12.804 [2024-07-15 23:51:47.679979] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.804 [2024-07-15 23:51:47.680006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.804 qpair failed and we were unable to recover it. 00:25:12.804 [2024-07-15 23:51:47.680113] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.804 [2024-07-15 23:51:47.680140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.804 qpair failed and we were unable to recover it. 00:25:12.804 [2024-07-15 23:51:47.680260] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.804 [2024-07-15 23:51:47.680285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.804 qpair failed and we were unable to recover it. 00:25:12.804 [2024-07-15 23:51:47.680380] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.804 [2024-07-15 23:51:47.680407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.804 qpair failed and we were unable to recover it. 
00:25:12.804 [2024-07-15 23:51:47.680531] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.804 [2024-07-15 23:51:47.680557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.804 qpair failed and we were unable to recover it. 00:25:12.804 [2024-07-15 23:51:47.680651] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.804 [2024-07-15 23:51:47.680678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.804 qpair failed and we were unable to recover it. 00:25:12.804 [2024-07-15 23:51:47.680804] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.804 [2024-07-15 23:51:47.680830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.804 qpair failed and we were unable to recover it. 00:25:12.804 [2024-07-15 23:51:47.680929] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.804 [2024-07-15 23:51:47.680962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.804 qpair failed and we were unable to recover it. 00:25:12.804 [2024-07-15 23:51:47.681084] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.804 [2024-07-15 23:51:47.681110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.804 qpair failed and we were unable to recover it. 00:25:12.804 [2024-07-15 23:51:47.681210] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.804 [2024-07-15 23:51:47.681236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.804 qpair failed and we were unable to recover it. 00:25:12.804 [2024-07-15 23:51:47.681334] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.804 [2024-07-15 23:51:47.681361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.804 qpair failed and we were unable to recover it. 00:25:12.804 [2024-07-15 23:51:47.681505] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.804 [2024-07-15 23:51:47.681531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.804 qpair failed and we were unable to recover it. 00:25:12.804 [2024-07-15 23:51:47.681628] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.804 [2024-07-15 23:51:47.681654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.804 qpair failed and we were unable to recover it. 00:25:12.804 [2024-07-15 23:51:47.681773] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.804 [2024-07-15 23:51:47.681799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.804 qpair failed and we were unable to recover it. 
00:25:12.804 [2024-07-15 23:51:47.681908] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.804 [2024-07-15 23:51:47.681947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.804 qpair failed and we were unable to recover it. 00:25:12.804 [2024-07-15 23:51:47.682061] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.804 [2024-07-15 23:51:47.682089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.804 qpair failed and we were unable to recover it. 00:25:12.804 [2024-07-15 23:51:47.682216] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.804 [2024-07-15 23:51:47.682242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.804 qpair failed and we were unable to recover it. 00:25:12.804 [2024-07-15 23:51:47.682365] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.804 [2024-07-15 23:51:47.682391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.804 qpair failed and we were unable to recover it. 00:25:12.804 [2024-07-15 23:51:47.682491] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.804 [2024-07-15 23:51:47.682517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.804 qpair failed and we were unable to recover it. 00:25:12.804 [2024-07-15 23:51:47.682609] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.804 [2024-07-15 23:51:47.682635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.804 qpair failed and we were unable to recover it. 00:25:12.804 [2024-07-15 23:51:47.682760] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.804 [2024-07-15 23:51:47.682788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.804 qpair failed and we were unable to recover it. 00:25:12.804 [2024-07-15 23:51:47.682885] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.804 [2024-07-15 23:51:47.682911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.804 qpair failed and we were unable to recover it. 00:25:12.804 [2024-07-15 23:51:47.683011] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.804 [2024-07-15 23:51:47.683038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.804 qpair failed and we were unable to recover it. 00:25:12.804 [2024-07-15 23:51:47.683139] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.804 [2024-07-15 23:51:47.683166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.804 qpair failed and we were unable to recover it. 
00:25:12.804 [2024-07-15 23:51:47.683256] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.804 [2024-07-15 23:51:47.683282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.804 qpair failed and we were unable to recover it. 00:25:12.804 [2024-07-15 23:51:47.683367] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.804 [2024-07-15 23:51:47.683393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.804 qpair failed and we were unable to recover it. 00:25:12.804 [2024-07-15 23:51:47.683518] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.804 [2024-07-15 23:51:47.683545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.804 qpair failed and we were unable to recover it. 00:25:12.804 [2024-07-15 23:51:47.683665] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.804 [2024-07-15 23:51:47.683697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.804 qpair failed and we were unable to recover it. 00:25:12.804 [2024-07-15 23:51:47.683814] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.804 [2024-07-15 23:51:47.683840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.804 qpair failed and we were unable to recover it. 00:25:12.804 [2024-07-15 23:51:47.683963] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.804 [2024-07-15 23:51:47.683990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.804 qpair failed and we were unable to recover it. 00:25:12.804 [2024-07-15 23:51:47.684075] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.804 [2024-07-15 23:51:47.684101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.804 qpair failed and we were unable to recover it. 00:25:12.804 [2024-07-15 23:51:47.684201] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.804 [2024-07-15 23:51:47.684228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.804 qpair failed and we were unable to recover it. 00:25:12.804 [2024-07-15 23:51:47.684350] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.804 [2024-07-15 23:51:47.684377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.804 qpair failed and we were unable to recover it. 00:25:12.804 [2024-07-15 23:51:47.684494] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.804 [2024-07-15 23:51:47.684520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.804 qpair failed and we were unable to recover it. 
00:25:12.805 [2024-07-15 23:51:47.684674] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.805 [2024-07-15 23:51:47.684711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.805 qpair failed and we were unable to recover it. 00:25:12.805 [2024-07-15 23:51:47.684838] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.805 [2024-07-15 23:51:47.684865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.805 qpair failed and we were unable to recover it. 00:25:12.805 [2024-07-15 23:51:47.684967] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.805 [2024-07-15 23:51:47.684995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.805 qpair failed and we were unable to recover it. 00:25:12.805 [2024-07-15 23:51:47.685091] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.805 [2024-07-15 23:51:47.685117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.805 qpair failed and we were unable to recover it. 00:25:12.805 [2024-07-15 23:51:47.685243] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.805 [2024-07-15 23:51:47.685270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.805 qpair failed and we were unable to recover it. 00:25:12.805 [2024-07-15 23:51:47.685395] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.805 [2024-07-15 23:51:47.685421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.805 qpair failed and we were unable to recover it. 00:25:12.805 [2024-07-15 23:51:47.685517] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.805 [2024-07-15 23:51:47.685544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.805 qpair failed and we were unable to recover it. 00:25:12.805 [2024-07-15 23:51:47.685645] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.805 [2024-07-15 23:51:47.685673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.805 qpair failed and we were unable to recover it. 00:25:12.805 [2024-07-15 23:51:47.685770] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.805 [2024-07-15 23:51:47.685797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.805 qpair failed and we were unable to recover it. 00:25:12.805 [2024-07-15 23:51:47.685945] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.805 [2024-07-15 23:51:47.685980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.805 qpair failed and we were unable to recover it. 
00:25:12.805 [2024-07-15 23:51:47.686076] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.805 [2024-07-15 23:51:47.686102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.805 qpair failed and we were unable to recover it. 00:25:12.805 [2024-07-15 23:51:47.686240] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.805 [2024-07-15 23:51:47.686266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.805 qpair failed and we were unable to recover it. 00:25:12.805 [2024-07-15 23:51:47.686415] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.805 [2024-07-15 23:51:47.686441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.805 qpair failed and we were unable to recover it. 00:25:12.805 [2024-07-15 23:51:47.686601] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.805 [2024-07-15 23:51:47.686638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.805 qpair failed and we were unable to recover it. 00:25:12.805 [2024-07-15 23:51:47.686825] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.805 [2024-07-15 23:51:47.686852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.805 qpair failed and we were unable to recover it. 00:25:12.805 [2024-07-15 23:51:47.686946] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.805 [2024-07-15 23:51:47.686978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.805 qpair failed and we were unable to recover it. 00:25:12.805 [2024-07-15 23:51:47.687104] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.805 [2024-07-15 23:51:47.687130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.805 qpair failed and we were unable to recover it. 00:25:12.805 [2024-07-15 23:51:47.687250] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.805 [2024-07-15 23:51:47.687297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.805 qpair failed and we were unable to recover it. 00:25:12.805 [2024-07-15 23:51:47.687462] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.805 [2024-07-15 23:51:47.687500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.805 qpair failed and we were unable to recover it. 00:25:12.805 [2024-07-15 23:51:47.687718] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.805 [2024-07-15 23:51:47.687755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.805 qpair failed and we were unable to recover it. 
00:25:12.805 [2024-07-15 23:51:47.687911] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.805 [2024-07-15 23:51:47.687948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.805 qpair failed and we were unable to recover it. 00:25:12.805 [2024-07-15 23:51:47.688150] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.805 [2024-07-15 23:51:47.688176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.805 qpair failed and we were unable to recover it. 00:25:12.805 [2024-07-15 23:51:47.688304] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.805 [2024-07-15 23:51:47.688349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.805 qpair failed and we were unable to recover it. 00:25:12.805 [2024-07-15 23:51:47.688485] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.805 [2024-07-15 23:51:47.688523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.805 qpair failed and we were unable to recover it. 00:25:12.805 [2024-07-15 23:51:47.688712] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.805 [2024-07-15 23:51:47.688749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.805 qpair failed and we were unable to recover it. 00:25:12.805 [2024-07-15 23:51:47.688902] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.805 [2024-07-15 23:51:47.688929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.805 qpair failed and we were unable to recover it. 00:25:12.805 [2024-07-15 23:51:47.689067] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.805 [2024-07-15 23:51:47.689094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.805 qpair failed and we were unable to recover it. 00:25:12.805 [2024-07-15 23:51:47.689193] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.805 [2024-07-15 23:51:47.689219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.805 qpair failed and we were unable to recover it. 00:25:12.805 [2024-07-15 23:51:47.689333] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.805 [2024-07-15 23:51:47.689370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.805 qpair failed and we were unable to recover it. 00:25:12.805 [2024-07-15 23:51:47.689536] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.805 [2024-07-15 23:51:47.689574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.805 qpair failed and we were unable to recover it. 
00:25:12.805 [2024-07-15 23:51:47.689713] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.805 [2024-07-15 23:51:47.689749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.805 qpair failed and we were unable to recover it. 00:25:12.805 [2024-07-15 23:51:47.689878] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.805 [2024-07-15 23:51:47.689904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.805 qpair failed and we were unable to recover it. 00:25:12.805 [2024-07-15 23:51:47.690006] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.805 [2024-07-15 23:51:47.690033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.805 qpair failed and we were unable to recover it. 00:25:12.805 [2024-07-15 23:51:47.690154] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.805 [2024-07-15 23:51:47.690185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.805 qpair failed and we were unable to recover it. 00:25:12.805 [2024-07-15 23:51:47.690281] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.805 [2024-07-15 23:51:47.690308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.805 qpair failed and we were unable to recover it. 00:25:12.805 [2024-07-15 23:51:47.690416] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.805 [2024-07-15 23:51:47.690443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.805 qpair failed and we were unable to recover it. 00:25:12.805 [2024-07-15 23:51:47.690613] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.805 [2024-07-15 23:51:47.690650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.805 qpair failed and we were unable to recover it. 00:25:12.805 [2024-07-15 23:51:47.690890] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.805 [2024-07-15 23:51:47.690929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.805 qpair failed and we were unable to recover it. 00:25:12.805 [2024-07-15 23:51:47.691090] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.805 [2024-07-15 23:51:47.691116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.805 qpair failed and we were unable to recover it. 00:25:12.805 [2024-07-15 23:51:47.691219] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.805 [2024-07-15 23:51:47.691246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.805 qpair failed and we were unable to recover it. 
00:25:12.805 [2024-07-15 23:51:47.691376] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.806 [2024-07-15 23:51:47.691415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.806 qpair failed and we were unable to recover it. 00:25:12.806 [2024-07-15 23:51:47.691577] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.806 [2024-07-15 23:51:47.691614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.806 qpair failed and we were unable to recover it. 00:25:12.806 [2024-07-15 23:51:47.691853] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.806 [2024-07-15 23:51:47.691917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.806 qpair failed and we were unable to recover it. 00:25:12.806 [2024-07-15 23:51:47.692131] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.806 [2024-07-15 23:51:47.692157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.806 qpair failed and we were unable to recover it. 00:25:12.806 [2024-07-15 23:51:47.692317] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.806 [2024-07-15 23:51:47.692354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.806 qpair failed and we were unable to recover it. 00:25:12.806 [2024-07-15 23:51:47.692533] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.806 [2024-07-15 23:51:47.692578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.806 qpair failed and we were unable to recover it. 00:25:12.806 [2024-07-15 23:51:47.692800] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.806 [2024-07-15 23:51:47.692837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.806 qpair failed and we were unable to recover it. 00:25:12.806 [2024-07-15 23:51:47.692986] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.806 [2024-07-15 23:51:47.693029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.806 qpair failed and we were unable to recover it. 00:25:12.806 [2024-07-15 23:51:47.693132] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.806 [2024-07-15 23:51:47.693157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.806 qpair failed and we were unable to recover it. 00:25:12.806 [2024-07-15 23:51:47.693279] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.806 [2024-07-15 23:51:47.693318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.806 qpair failed and we were unable to recover it. 
00:25:12.806 [2024-07-15 23:51:47.693475] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.806 [2024-07-15 23:51:47.693511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.806 qpair failed and we were unable to recover it. 00:25:12.806 [2024-07-15 23:51:47.693707] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.806 [2024-07-15 23:51:47.693744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.806 qpair failed and we were unable to recover it. 00:25:12.806 [2024-07-15 23:51:47.693896] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.806 [2024-07-15 23:51:47.693922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.806 qpair failed and we were unable to recover it. 00:25:12.806 [2024-07-15 23:51:47.694023] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.806 [2024-07-15 23:51:47.694050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.806 qpair failed and we were unable to recover it. 00:25:12.806 [2024-07-15 23:51:47.694154] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.806 [2024-07-15 23:51:47.694180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.806 qpair failed and we were unable to recover it. 00:25:12.806 [2024-07-15 23:51:47.694344] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.806 [2024-07-15 23:51:47.694381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.806 qpair failed and we were unable to recover it. 00:25:12.806 [2024-07-15 23:51:47.694549] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.806 [2024-07-15 23:51:47.694586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.806 qpair failed and we were unable to recover it. 00:25:12.806 [2024-07-15 23:51:47.694747] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.806 [2024-07-15 23:51:47.694784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.806 qpair failed and we were unable to recover it. 00:25:12.806 [2024-07-15 23:51:47.694975] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.806 [2024-07-15 23:51:47.695015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.806 qpair failed and we were unable to recover it. 00:25:12.806 [2024-07-15 23:51:47.695192] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.806 [2024-07-15 23:51:47.695220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.806 qpair failed and we were unable to recover it. 
00:25:12.806 [2024-07-15 23:51:47.695415] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.806 [2024-07-15 23:51:47.695467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.806 qpair failed and we were unable to recover it. 00:25:12.806 [2024-07-15 23:51:47.695623] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.806 [2024-07-15 23:51:47.695671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.806 qpair failed and we were unable to recover it. 00:25:12.806 [2024-07-15 23:51:47.695797] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.806 [2024-07-15 23:51:47.695823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.806 qpair failed and we were unable to recover it. 00:25:12.806 [2024-07-15 23:51:47.695945] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.806 [2024-07-15 23:51:47.695979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.806 qpair failed and we were unable to recover it. 00:25:12.806 [2024-07-15 23:51:47.696140] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.806 [2024-07-15 23:51:47.696188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.806 qpair failed and we were unable to recover it. 00:25:12.806 [2024-07-15 23:51:47.696338] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.806 [2024-07-15 23:51:47.696385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.806 qpair failed and we were unable to recover it. 00:25:12.806 [2024-07-15 23:51:47.696508] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.806 [2024-07-15 23:51:47.696533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.806 qpair failed and we were unable to recover it. 00:25:12.806 [2024-07-15 23:51:47.696678] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.806 [2024-07-15 23:51:47.696704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.806 qpair failed and we were unable to recover it. 00:25:12.806 [2024-07-15 23:51:47.696825] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.806 [2024-07-15 23:51:47.696851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.806 qpair failed and we were unable to recover it. 00:25:12.806 [2024-07-15 23:51:47.696946] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.806 [2024-07-15 23:51:47.696979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.806 qpair failed and we were unable to recover it. 
00:25:12.806 [... the same connect() failed (errno = 111) / qpair failed message triplet repeats continuously from 23:51:47.695 through 23:51:47.728, alternating between tqpair=0x7feb8c000b90 and tqpair=0x7feb84000b90, always for addr=10.0.0.2, port=4420; no attempt recovered ...]
00:25:12.811 [2024-07-15 23:51:47.729059] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.811 [2024-07-15 23:51:47.729086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.811 qpair failed and we were unable to recover it. 00:25:12.811 [2024-07-15 23:51:47.729225] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.811 [2024-07-15 23:51:47.729268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.811 qpair failed and we were unable to recover it. 00:25:12.811 [2024-07-15 23:51:47.729415] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.811 [2024-07-15 23:51:47.729458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.811 qpair failed and we were unable to recover it. 00:25:12.811 [2024-07-15 23:51:47.729580] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.811 [2024-07-15 23:51:47.729624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.811 qpair failed and we were unable to recover it. 00:25:12.811 [2024-07-15 23:51:47.729733] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.811 [2024-07-15 23:51:47.729761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.811 qpair failed and we were unable to recover it. 00:25:12.811 [2024-07-15 23:51:47.729860] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.811 [2024-07-15 23:51:47.729886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.811 qpair failed and we were unable to recover it. 00:25:12.811 [2024-07-15 23:51:47.730009] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.811 [2024-07-15 23:51:47.730037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.811 qpair failed and we were unable to recover it. 00:25:12.811 [2024-07-15 23:51:47.730177] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.811 [2024-07-15 23:51:47.730204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.812 qpair failed and we were unable to recover it. 00:25:12.812 [2024-07-15 23:51:47.730331] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.812 [2024-07-15 23:51:47.730358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.812 qpair failed and we were unable to recover it. 00:25:12.812 [2024-07-15 23:51:47.730484] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.812 [2024-07-15 23:51:47.730513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.812 qpair failed and we were unable to recover it. 
00:25:12.812 [2024-07-15 23:51:47.730610] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.812 [2024-07-15 23:51:47.730638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.812 qpair failed and we were unable to recover it. 00:25:12.812 [2024-07-15 23:51:47.730750] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.812 [2024-07-15 23:51:47.730777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.812 qpair failed and we were unable to recover it. 00:25:12.812 [2024-07-15 23:51:47.730880] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.812 [2024-07-15 23:51:47.730909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.812 qpair failed and we were unable to recover it. 00:25:12.812 [2024-07-15 23:51:47.731086] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.812 [2024-07-15 23:51:47.731114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.812 qpair failed and we were unable to recover it. 00:25:12.812 [2024-07-15 23:51:47.731224] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.812 [2024-07-15 23:51:47.731253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.812 qpair failed and we were unable to recover it. 00:25:12.812 [2024-07-15 23:51:47.731403] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.812 [2024-07-15 23:51:47.731451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.812 qpair failed and we were unable to recover it. 00:25:12.812 [2024-07-15 23:51:47.731567] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.812 [2024-07-15 23:51:47.731609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.812 qpair failed and we were unable to recover it. 00:25:12.812 [2024-07-15 23:51:47.731712] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.812 [2024-07-15 23:51:47.731738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.812 qpair failed and we were unable to recover it. 00:25:12.812 [2024-07-15 23:51:47.731882] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.812 [2024-07-15 23:51:47.731908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.812 qpair failed and we were unable to recover it. 00:25:12.812 [2024-07-15 23:51:47.732027] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.812 [2024-07-15 23:51:47.732054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.812 qpair failed and we were unable to recover it. 
00:25:12.812 [2024-07-15 23:51:47.732153] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.812 [2024-07-15 23:51:47.732180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.812 qpair failed and we were unable to recover it. 00:25:12.812 [2024-07-15 23:51:47.732278] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.812 [2024-07-15 23:51:47.732304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.812 qpair failed and we were unable to recover it. 00:25:12.812 [2024-07-15 23:51:47.732399] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.812 [2024-07-15 23:51:47.732425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.812 qpair failed and we were unable to recover it. 00:25:12.812 [2024-07-15 23:51:47.732545] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.812 [2024-07-15 23:51:47.732571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.812 qpair failed and we were unable to recover it. 00:25:12.812 [2024-07-15 23:51:47.732663] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.812 [2024-07-15 23:51:47.732689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.812 qpair failed and we were unable to recover it. 00:25:12.812 [2024-07-15 23:51:47.732814] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.812 [2024-07-15 23:51:47.732841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.812 qpair failed and we were unable to recover it. 00:25:12.812 [2024-07-15 23:51:47.732931] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.812 [2024-07-15 23:51:47.732965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.812 qpair failed and we were unable to recover it. 00:25:12.812 [2024-07-15 23:51:47.733086] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.812 [2024-07-15 23:51:47.733113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.812 qpair failed and we were unable to recover it. 00:25:12.812 [2024-07-15 23:51:47.733207] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.812 [2024-07-15 23:51:47.733233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.812 qpair failed and we were unable to recover it. 00:25:12.812 [2024-07-15 23:51:47.733358] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.812 [2024-07-15 23:51:47.733384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.812 qpair failed and we were unable to recover it. 
00:25:12.812 [2024-07-15 23:51:47.733479] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.812 [2024-07-15 23:51:47.733510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.812 qpair failed and we were unable to recover it. 00:25:12.812 [2024-07-15 23:51:47.733608] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.812 [2024-07-15 23:51:47.733637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.812 qpair failed and we were unable to recover it. 00:25:12.812 [2024-07-15 23:51:47.733763] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.812 [2024-07-15 23:51:47.733791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.812 qpair failed and we were unable to recover it. 00:25:12.812 [2024-07-15 23:51:47.733879] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.812 [2024-07-15 23:51:47.733905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.812 qpair failed and we were unable to recover it. 00:25:12.812 [2024-07-15 23:51:47.734039] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.812 [2024-07-15 23:51:47.734066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.812 qpair failed and we were unable to recover it. 00:25:12.812 [2024-07-15 23:51:47.734189] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.812 [2024-07-15 23:51:47.734216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.812 qpair failed and we were unable to recover it. 00:25:12.812 [2024-07-15 23:51:47.734317] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.812 [2024-07-15 23:51:47.734344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.812 qpair failed and we were unable to recover it. 00:25:12.812 [2024-07-15 23:51:47.734480] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.812 [2024-07-15 23:51:47.734525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.812 qpair failed and we were unable to recover it. 00:25:12.812 [2024-07-15 23:51:47.734645] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.812 [2024-07-15 23:51:47.734688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.812 qpair failed and we were unable to recover it. 00:25:12.812 [2024-07-15 23:51:47.734808] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.812 [2024-07-15 23:51:47.734835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.812 qpair failed and we were unable to recover it. 
00:25:12.812 [2024-07-15 23:51:47.734963] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.812 [2024-07-15 23:51:47.734990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.812 qpair failed and we were unable to recover it. 00:25:12.812 [2024-07-15 23:51:47.735110] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.812 [2024-07-15 23:51:47.735138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.812 qpair failed and we were unable to recover it. 00:25:12.812 [2024-07-15 23:51:47.735267] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.812 [2024-07-15 23:51:47.735294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.812 qpair failed and we were unable to recover it. 00:25:12.812 [2024-07-15 23:51:47.735429] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.812 [2024-07-15 23:51:47.735472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.812 qpair failed and we were unable to recover it. 00:25:12.812 [2024-07-15 23:51:47.735603] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.812 [2024-07-15 23:51:47.735629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.812 qpair failed and we were unable to recover it. 00:25:12.812 [2024-07-15 23:51:47.735732] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.812 [2024-07-15 23:51:47.735759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.812 qpair failed and we were unable to recover it. 00:25:12.812 [2024-07-15 23:51:47.735883] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.812 [2024-07-15 23:51:47.735911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.812 qpair failed and we were unable to recover it. 00:25:12.812 [2024-07-15 23:51:47.736040] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.812 [2024-07-15 23:51:47.736067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.812 qpair failed and we were unable to recover it. 00:25:12.812 [2024-07-15 23:51:47.736198] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.813 [2024-07-15 23:51:47.736225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.813 qpair failed and we were unable to recover it. 00:25:12.813 [2024-07-15 23:51:47.736352] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.813 [2024-07-15 23:51:47.736378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.813 qpair failed and we were unable to recover it. 
00:25:12.813 [2024-07-15 23:51:47.736500] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.813 [2024-07-15 23:51:47.736525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.813 qpair failed and we were unable to recover it. 00:25:12.813 [2024-07-15 23:51:47.736636] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.813 [2024-07-15 23:51:47.736663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.813 qpair failed and we were unable to recover it. 00:25:12.813 [2024-07-15 23:51:47.736805] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.813 [2024-07-15 23:51:47.736834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.813 qpair failed and we were unable to recover it. 00:25:12.813 [2024-07-15 23:51:47.736938] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.813 [2024-07-15 23:51:47.736971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.813 qpair failed and we were unable to recover it. 00:25:12.813 [2024-07-15 23:51:47.737110] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.813 [2024-07-15 23:51:47.737152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.813 qpair failed and we were unable to recover it. 00:25:12.813 [2024-07-15 23:51:47.737262] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.813 [2024-07-15 23:51:47.737305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.813 qpair failed and we were unable to recover it. 00:25:12.813 [2024-07-15 23:51:47.737399] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.813 [2024-07-15 23:51:47.737425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.813 qpair failed and we were unable to recover it. 00:25:12.813 [2024-07-15 23:51:47.737553] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.813 [2024-07-15 23:51:47.737579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.813 qpair failed and we were unable to recover it. 00:25:12.813 [2024-07-15 23:51:47.737674] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.813 [2024-07-15 23:51:47.737701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.813 qpair failed and we were unable to recover it. 00:25:12.813 [2024-07-15 23:51:47.737823] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.813 [2024-07-15 23:51:47.737850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.813 qpair failed and we were unable to recover it. 
00:25:12.813 [2024-07-15 23:51:47.737970] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.813 [2024-07-15 23:51:47.737997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.813 qpair failed and we were unable to recover it. 00:25:12.813 [2024-07-15 23:51:47.738119] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.813 [2024-07-15 23:51:47.738145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.813 qpair failed and we were unable to recover it. 00:25:12.813 [2024-07-15 23:51:47.738260] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.813 [2024-07-15 23:51:47.738304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.813 qpair failed and we were unable to recover it. 00:25:12.813 [2024-07-15 23:51:47.738403] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.813 [2024-07-15 23:51:47.738428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.813 qpair failed and we were unable to recover it. 00:25:12.813 [2024-07-15 23:51:47.738565] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.813 [2024-07-15 23:51:47.738590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.813 qpair failed and we were unable to recover it. 00:25:12.813 [2024-07-15 23:51:47.738712] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.813 [2024-07-15 23:51:47.738737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.813 qpair failed and we were unable to recover it. 00:25:12.813 [2024-07-15 23:51:47.738834] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.813 [2024-07-15 23:51:47.738859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.813 qpair failed and we were unable to recover it. 00:25:12.813 [2024-07-15 23:51:47.738962] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.813 [2024-07-15 23:51:47.738990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.813 qpair failed and we were unable to recover it. 00:25:12.813 [2024-07-15 23:51:47.739113] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.813 [2024-07-15 23:51:47.739140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.813 qpair failed and we were unable to recover it. 00:25:12.813 [2024-07-15 23:51:47.739264] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.813 [2024-07-15 23:51:47.739290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.813 qpair failed and we were unable to recover it. 
00:25:12.813 [2024-07-15 23:51:47.739418] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.813 [2024-07-15 23:51:47.739449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.813 qpair failed and we were unable to recover it. 00:25:12.813 [2024-07-15 23:51:47.739545] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.813 [2024-07-15 23:51:47.739571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.813 qpair failed and we were unable to recover it. 00:25:12.813 [2024-07-15 23:51:47.739699] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.813 [2024-07-15 23:51:47.739726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.813 qpair failed and we were unable to recover it. 00:25:12.813 [2024-07-15 23:51:47.739820] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.813 [2024-07-15 23:51:47.739848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.813 qpair failed and we were unable to recover it. 00:25:12.813 [2024-07-15 23:51:47.739953] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.813 [2024-07-15 23:51:47.739987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.813 qpair failed and we were unable to recover it. 00:25:12.813 [2024-07-15 23:51:47.740085] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.813 [2024-07-15 23:51:47.740111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.813 qpair failed and we were unable to recover it. 00:25:12.813 [2024-07-15 23:51:47.740205] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.813 [2024-07-15 23:51:47.740232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.813 qpair failed and we were unable to recover it. 00:25:12.813 [2024-07-15 23:51:47.740331] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.813 [2024-07-15 23:51:47.740356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.813 qpair failed and we were unable to recover it. 00:25:12.813 [2024-07-15 23:51:47.740446] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.813 [2024-07-15 23:51:47.740472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.813 qpair failed and we were unable to recover it. 00:25:12.813 [2024-07-15 23:51:47.740600] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.813 [2024-07-15 23:51:47.740626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.813 qpair failed and we were unable to recover it. 
00:25:12.813 [2024-07-15 23:51:47.740719] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.813 [2024-07-15 23:51:47.740745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.813 qpair failed and we were unable to recover it. 00:25:12.813 [2024-07-15 23:51:47.740868] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.813 [2024-07-15 23:51:47.740894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.813 qpair failed and we were unable to recover it. 00:25:12.813 [2024-07-15 23:51:47.741016] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.813 [2024-07-15 23:51:47.741042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.813 qpair failed and we were unable to recover it. 00:25:12.814 [2024-07-15 23:51:47.741176] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.814 [2024-07-15 23:51:47.741202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.814 qpair failed and we were unable to recover it. 00:25:12.814 [2024-07-15 23:51:47.741334] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.814 [2024-07-15 23:51:47.741360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.814 qpair failed and we were unable to recover it. 00:25:12.814 [2024-07-15 23:51:47.741455] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.814 [2024-07-15 23:51:47.741481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.814 qpair failed and we were unable to recover it. 00:25:12.814 [2024-07-15 23:51:47.741601] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.814 [2024-07-15 23:51:47.741627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.814 qpair failed and we were unable to recover it. 00:25:12.814 [2024-07-15 23:51:47.741731] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.814 [2024-07-15 23:51:47.741757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.814 qpair failed and we were unable to recover it. 00:25:12.814 [2024-07-15 23:51:47.741855] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.814 [2024-07-15 23:51:47.741880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.814 qpair failed and we were unable to recover it. 00:25:12.814 [2024-07-15 23:51:47.741989] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.814 [2024-07-15 23:51:47.742017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.814 qpair failed and we were unable to recover it. 
00:25:12.814 [2024-07-15 23:51:47.742137] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.814 [2024-07-15 23:51:47.742163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.814 qpair failed and we were unable to recover it. 00:25:12.814 [2024-07-15 23:51:47.742288] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.814 [2024-07-15 23:51:47.742314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.814 qpair failed and we were unable to recover it. 00:25:12.814 [2024-07-15 23:51:47.742408] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.814 [2024-07-15 23:51:47.742434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.814 qpair failed and we were unable to recover it. 00:25:12.814 [2024-07-15 23:51:47.742557] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.814 [2024-07-15 23:51:47.742584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.814 qpair failed and we were unable to recover it. 00:25:12.814 [2024-07-15 23:51:47.742701] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.814 [2024-07-15 23:51:47.742740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.814 qpair failed and we were unable to recover it. 00:25:12.814 [2024-07-15 23:51:47.742893] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.814 [2024-07-15 23:51:47.742921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.814 qpair failed and we were unable to recover it. 00:25:12.814 [2024-07-15 23:51:47.743041] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.814 [2024-07-15 23:51:47.743069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.814 qpair failed and we were unable to recover it. 00:25:12.814 [2024-07-15 23:51:47.743239] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.814 [2024-07-15 23:51:47.743267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.814 qpair failed and we were unable to recover it. 00:25:12.814 [2024-07-15 23:51:47.743374] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.814 [2024-07-15 23:51:47.743401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.814 qpair failed and we were unable to recover it. 00:25:12.814 [2024-07-15 23:51:47.743502] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.814 [2024-07-15 23:51:47.743531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.814 qpair failed and we were unable to recover it. 
00:25:12.814 [2024-07-15 23:51:47.743635] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.814 [2024-07-15 23:51:47.743663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.814 qpair failed and we were unable to recover it. 00:25:12.814 [2024-07-15 23:51:47.743806] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.814 [2024-07-15 23:51:47.743832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.814 qpair failed and we were unable to recover it. 00:25:12.814 [2024-07-15 23:51:47.743952] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.814 [2024-07-15 23:51:47.743984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.814 qpair failed and we were unable to recover it. 00:25:12.814 [2024-07-15 23:51:47.744120] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.814 [2024-07-15 23:51:47.744147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.814 qpair failed and we were unable to recover it. 00:25:12.814 [2024-07-15 23:51:47.744244] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.814 [2024-07-15 23:51:47.744270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.814 qpair failed and we were unable to recover it. 00:25:12.814 [2024-07-15 23:51:47.744375] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.814 [2024-07-15 23:51:47.744401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.814 qpair failed and we were unable to recover it. 00:25:12.814 [2024-07-15 23:51:47.744496] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.814 [2024-07-15 23:51:47.744524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.814 qpair failed and we were unable to recover it. 00:25:12.814 [2024-07-15 23:51:47.744678] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.814 [2024-07-15 23:51:47.744704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.814 qpair failed and we were unable to recover it. 00:25:12.814 [2024-07-15 23:51:47.744834] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.814 [2024-07-15 23:51:47.744863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.814 qpair failed and we were unable to recover it. 00:25:12.814 [2024-07-15 23:51:47.744984] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.814 [2024-07-15 23:51:47.745013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.814 qpair failed and we were unable to recover it. 
00:25:12.814 [2024-07-15 23:51:47.745112] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.814 [2024-07-15 23:51:47.745144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.814 qpair failed and we were unable to recover it. 00:25:12.814 [2024-07-15 23:51:47.745253] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.814 [2024-07-15 23:51:47.745281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.814 qpair failed and we were unable to recover it. 00:25:12.814 [2024-07-15 23:51:47.745441] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.814 [2024-07-15 23:51:47.745483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.814 qpair failed and we were unable to recover it. 00:25:12.814 [2024-07-15 23:51:47.745625] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.814 [2024-07-15 23:51:47.745652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.814 qpair failed and we were unable to recover it. 00:25:12.814 [2024-07-15 23:51:47.745768] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.814 [2024-07-15 23:51:47.745794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.814 qpair failed and we were unable to recover it. 00:25:12.814 [2024-07-15 23:51:47.745916] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.814 [2024-07-15 23:51:47.745943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.814 qpair failed and we were unable to recover it. 00:25:12.814 [2024-07-15 23:51:47.746056] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.814 [2024-07-15 23:51:47.746082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.814 qpair failed and we were unable to recover it. 00:25:12.814 [2024-07-15 23:51:47.746178] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.814 [2024-07-15 23:51:47.746204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.814 qpair failed and we were unable to recover it. 00:25:12.814 [2024-07-15 23:51:47.746303] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.814 [2024-07-15 23:51:47.746329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.814 qpair failed and we were unable to recover it. 00:25:12.814 [2024-07-15 23:51:47.746448] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.814 [2024-07-15 23:51:47.746474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.814 qpair failed and we were unable to recover it. 
00:25:12.814 [2024-07-15 23:51:47.746569] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.814 [2024-07-15 23:51:47.746595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.814 qpair failed and we were unable to recover it. 00:25:12.814 [2024-07-15 23:51:47.746691] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.814 [2024-07-15 23:51:47.746720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.814 qpair failed and we were unable to recover it. 00:25:12.814 [2024-07-15 23:51:47.746817] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.814 [2024-07-15 23:51:47.746844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.814 qpair failed and we were unable to recover it. 00:25:12.814 [2024-07-15 23:51:47.746939] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.815 [2024-07-15 23:51:47.746973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.815 qpair failed and we were unable to recover it. 00:25:12.815 [2024-07-15 23:51:47.747102] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.815 [2024-07-15 23:51:47.747129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.815 qpair failed and we were unable to recover it. 00:25:12.815 [2024-07-15 23:51:47.747225] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.815 [2024-07-15 23:51:47.747252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.815 qpair failed and we were unable to recover it. 00:25:12.815 [2024-07-15 23:51:47.747355] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.815 [2024-07-15 23:51:47.747381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.815 qpair failed and we were unable to recover it. 00:25:12.815 [2024-07-15 23:51:47.747504] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.815 [2024-07-15 23:51:47.747530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.815 qpair failed and we were unable to recover it. 00:25:12.815 [2024-07-15 23:51:47.747648] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.815 [2024-07-15 23:51:47.747674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.815 qpair failed and we were unable to recover it. 00:25:12.815 [2024-07-15 23:51:47.747776] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.815 [2024-07-15 23:51:47.747803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.815 qpair failed and we were unable to recover it. 
00:25:12.815 [2024-07-15 23:51:47.747896] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.815 [2024-07-15 23:51:47.747923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.815 qpair failed and we were unable to recover it. 00:25:12.815 [2024-07-15 23:51:47.748037] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.815 [2024-07-15 23:51:47.748064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.815 qpair failed and we were unable to recover it. 00:25:12.815 [2024-07-15 23:51:47.748156] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.815 [2024-07-15 23:51:47.748182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.815 qpair failed and we were unable to recover it. 00:25:12.815 [2024-07-15 23:51:47.748308] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.815 [2024-07-15 23:51:47.748334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.815 qpair failed and we were unable to recover it. 00:25:12.815 [2024-07-15 23:51:47.748456] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.815 [2024-07-15 23:51:47.748483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.815 qpair failed and we were unable to recover it. 00:25:12.815 [2024-07-15 23:51:47.748605] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.815 [2024-07-15 23:51:47.748632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.815 qpair failed and we were unable to recover it. 00:25:12.815 [2024-07-15 23:51:47.748760] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.815 [2024-07-15 23:51:47.748790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.815 qpair failed and we were unable to recover it. 00:25:12.815 [2024-07-15 23:51:47.748893] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.815 [2024-07-15 23:51:47.748919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.815 qpair failed and we were unable to recover it. 00:25:12.815 [2024-07-15 23:51:47.749065] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.815 [2024-07-15 23:51:47.749105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.815 qpair failed and we were unable to recover it. 00:25:12.815 [2024-07-15 23:51:47.749236] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.815 [2024-07-15 23:51:47.749264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.815 qpair failed and we were unable to recover it. 
00:25:12.815 [2024-07-15 23:51:47.749395] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.815 [2024-07-15 23:51:47.749421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.815 qpair failed and we were unable to recover it. 00:25:12.815 [2024-07-15 23:51:47.749526] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.815 [2024-07-15 23:51:47.749552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.815 qpair failed and we were unable to recover it. 00:25:12.815 [2024-07-15 23:51:47.749652] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.815 [2024-07-15 23:51:47.749679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.815 qpair failed and we were unable to recover it. 00:25:12.815 [2024-07-15 23:51:47.749841] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.815 [2024-07-15 23:51:47.749868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.815 qpair failed and we were unable to recover it. 00:25:12.815 [2024-07-15 23:51:47.749970] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.815 [2024-07-15 23:51:47.750006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.815 qpair failed and we were unable to recover it. 00:25:12.815 [2024-07-15 23:51:47.750107] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.815 [2024-07-15 23:51:47.750134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.815 qpair failed and we were unable to recover it. 00:25:12.815 [2024-07-15 23:51:47.750237] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.815 [2024-07-15 23:51:47.750264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.815 qpair failed and we were unable to recover it. 00:25:12.815 [2024-07-15 23:51:47.750393] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.815 [2024-07-15 23:51:47.750420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.815 qpair failed and we were unable to recover it. 00:25:12.815 [2024-07-15 23:51:47.750523] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.815 [2024-07-15 23:51:47.750552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.815 qpair failed and we were unable to recover it. 00:25:12.815 [2024-07-15 23:51:47.750672] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.815 [2024-07-15 23:51:47.750699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.815 qpair failed and we were unable to recover it. 
00:25:12.815 [2024-07-15 23:51:47.750803] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.815 [2024-07-15 23:51:47.750830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.815 qpair failed and we were unable to recover it. 00:25:12.815 [2024-07-15 23:51:47.750952] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.815 [2024-07-15 23:51:47.750984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.815 qpair failed and we were unable to recover it. 00:25:12.815 [2024-07-15 23:51:47.751109] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.815 [2024-07-15 23:51:47.751136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.815 qpair failed and we were unable to recover it. 00:25:12.815 [2024-07-15 23:51:47.751244] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.815 [2024-07-15 23:51:47.751282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.815 qpair failed and we were unable to recover it. 00:25:12.815 [2024-07-15 23:51:47.751409] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.815 [2024-07-15 23:51:47.751446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.815 qpair failed and we were unable to recover it. 00:25:12.815 [2024-07-15 23:51:47.751581] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.815 [2024-07-15 23:51:47.751621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.815 qpair failed and we were unable to recover it. 00:25:12.815 [2024-07-15 23:51:47.751768] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.815 [2024-07-15 23:51:47.751807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.815 qpair failed and we were unable to recover it. 00:25:12.815 [2024-07-15 23:51:47.751945] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.815 [2024-07-15 23:51:47.751979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.815 qpair failed and we were unable to recover it. 00:25:12.815 [2024-07-15 23:51:47.752083] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.815 [2024-07-15 23:51:47.752110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.815 qpair failed and we were unable to recover it. 00:25:12.815 [2024-07-15 23:51:47.752206] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.815 [2024-07-15 23:51:47.752232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.815 qpair failed and we were unable to recover it. 
00:25:12.815 [2024-07-15 23:51:47.752324] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.815 [2024-07-15 23:51:47.752350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.815 qpair failed and we were unable to recover it. 00:25:12.815 [2024-07-15 23:51:47.752496] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.815 [2024-07-15 23:51:47.752522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.815 qpair failed and we were unable to recover it. 00:25:12.815 [2024-07-15 23:51:47.752639] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.815 [2024-07-15 23:51:47.752664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.815 qpair failed and we were unable to recover it. 00:25:12.815 [2024-07-15 23:51:47.752763] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.815 [2024-07-15 23:51:47.752790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.815 qpair failed and we were unable to recover it. 00:25:12.816 [2024-07-15 23:51:47.752917] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.816 [2024-07-15 23:51:47.752943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.816 qpair failed and we were unable to recover it. 00:25:12.816 [2024-07-15 23:51:47.753056] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.816 [2024-07-15 23:51:47.753084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.816 qpair failed and we were unable to recover it. 00:25:12.816 [2024-07-15 23:51:47.753186] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.816 [2024-07-15 23:51:47.753213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.816 qpair failed and we were unable to recover it. 00:25:12.816 [2024-07-15 23:51:47.753338] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.816 [2024-07-15 23:51:47.753365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.816 qpair failed and we were unable to recover it. 00:25:12.816 [2024-07-15 23:51:47.753485] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.816 [2024-07-15 23:51:47.753511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.816 qpair failed and we were unable to recover it. 00:25:12.816 [2024-07-15 23:51:47.753635] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.816 [2024-07-15 23:51:47.753661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.816 qpair failed and we were unable to recover it. 
00:25:12.816 [2024-07-15 23:51:47.753822] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.816 [2024-07-15 23:51:47.753861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.816 qpair failed and we were unable to recover it. 00:25:12.816 [2024-07-15 23:51:47.753991] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.816 [2024-07-15 23:51:47.754025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.816 qpair failed and we were unable to recover it. 00:25:12.816 [2024-07-15 23:51:47.754173] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.816 [2024-07-15 23:51:47.754199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.816 qpair failed and we were unable to recover it. 00:25:12.816 [2024-07-15 23:51:47.754297] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.816 [2024-07-15 23:51:47.754323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.816 qpair failed and we were unable to recover it. 00:25:12.816 [2024-07-15 23:51:47.754422] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.816 [2024-07-15 23:51:47.754448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.816 qpair failed and we were unable to recover it. 00:25:12.816 [2024-07-15 23:51:47.754544] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.816 [2024-07-15 23:51:47.754570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.816 qpair failed and we were unable to recover it. 00:25:12.816 [2024-07-15 23:51:47.754666] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.816 [2024-07-15 23:51:47.754718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.816 qpair failed and we were unable to recover it. 00:25:12.816 [2024-07-15 23:51:47.754841] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.816 [2024-07-15 23:51:47.754872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.816 qpair failed and we were unable to recover it. 00:25:12.816 [2024-07-15 23:51:47.755029] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.816 [2024-07-15 23:51:47.755055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.816 qpair failed and we were unable to recover it. 00:25:12.816 [2024-07-15 23:51:47.755179] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.816 [2024-07-15 23:51:47.755206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.816 qpair failed and we were unable to recover it. 
00:25:12.816 [2024-07-15 23:51:47.755354] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.816 [2024-07-15 23:51:47.755391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.816 qpair failed and we were unable to recover it. 00:25:12.816 [2024-07-15 23:51:47.755552] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.816 [2024-07-15 23:51:47.755589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.816 qpair failed and we were unable to recover it. 00:25:12.816 [2024-07-15 23:51:47.755747] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.816 [2024-07-15 23:51:47.755784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.816 qpair failed and we were unable to recover it. 00:25:12.816 [2024-07-15 23:51:47.755935] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.816 [2024-07-15 23:51:47.755967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.816 qpair failed and we were unable to recover it. 00:25:12.816 [2024-07-15 23:51:47.756087] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.816 [2024-07-15 23:51:47.756114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.816 qpair failed and we were unable to recover it. 00:25:12.816 [2024-07-15 23:51:47.756208] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.816 [2024-07-15 23:51:47.756233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.816 qpair failed and we were unable to recover it. 00:25:12.816 [2024-07-15 23:51:47.756326] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.816 [2024-07-15 23:51:47.756352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.816 qpair failed and we were unable to recover it. 00:25:12.816 [2024-07-15 23:51:47.756473] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.816 [2024-07-15 23:51:47.756500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.816 qpair failed and we were unable to recover it. 00:25:12.816 [2024-07-15 23:51:47.756614] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.816 [2024-07-15 23:51:47.756654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.816 qpair failed and we were unable to recover it. 00:25:12.816 [2024-07-15 23:51:47.756759] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.816 [2024-07-15 23:51:47.756788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.816 qpair failed and we were unable to recover it. 
00:25:12.816 [2024-07-15 23:51:47.756910] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.816 [2024-07-15 23:51:47.756937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.816 qpair failed and we were unable to recover it. 00:25:12.816 [2024-07-15 23:51:47.757058] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.816 [2024-07-15 23:51:47.757084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.816 qpair failed and we were unable to recover it. 00:25:12.816 [2024-07-15 23:51:47.757184] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.816 [2024-07-15 23:51:47.757210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.816 qpair failed and we were unable to recover it. 00:25:12.816 [2024-07-15 23:51:47.757310] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.816 [2024-07-15 23:51:47.757337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.816 qpair failed and we were unable to recover it. 00:25:12.816 [2024-07-15 23:51:47.757457] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.816 [2024-07-15 23:51:47.757483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.816 qpair failed and we were unable to recover it. 00:25:12.816 [2024-07-15 23:51:47.757581] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.816 [2024-07-15 23:51:47.757609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.816 qpair failed and we were unable to recover it. 00:25:12.816 [2024-07-15 23:51:47.757759] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.816 [2024-07-15 23:51:47.757786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.816 qpair failed and we were unable to recover it. 00:25:12.816 [2024-07-15 23:51:47.757891] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.816 [2024-07-15 23:51:47.757919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.816 qpair failed and we were unable to recover it. 00:25:12.816 [2024-07-15 23:51:47.758046] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.816 [2024-07-15 23:51:47.758086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.816 qpair failed and we were unable to recover it. 00:25:12.816 [2024-07-15 23:51:47.758195] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.816 [2024-07-15 23:51:47.758223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.816 qpair failed and we were unable to recover it. 
00:25:12.816 [2024-07-15 23:51:47.758346] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.816 [2024-07-15 23:51:47.758372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.816 qpair failed and we were unable to recover it. 00:25:12.816 [2024-07-15 23:51:47.758466] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.816 [2024-07-15 23:51:47.758492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.816 qpair failed and we were unable to recover it. 00:25:12.816 [2024-07-15 23:51:47.758590] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.816 [2024-07-15 23:51:47.758617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.816 qpair failed and we were unable to recover it. 00:25:12.816 [2024-07-15 23:51:47.758716] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.816 [2024-07-15 23:51:47.758744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.816 qpair failed and we were unable to recover it. 00:25:12.816 [2024-07-15 23:51:47.758839] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.816 [2024-07-15 23:51:47.758867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.816 qpair failed and we were unable to recover it. 00:25:12.817 [2024-07-15 23:51:47.758963] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.817 [2024-07-15 23:51:47.758990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.817 qpair failed and we were unable to recover it. 00:25:12.817 [2024-07-15 23:51:47.759096] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.817 [2024-07-15 23:51:47.759122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.817 qpair failed and we were unable to recover it. 00:25:12.817 [2024-07-15 23:51:47.759278] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.817 [2024-07-15 23:51:47.759304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.817 qpair failed and we were unable to recover it. 00:25:12.817 [2024-07-15 23:51:47.759399] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.817 [2024-07-15 23:51:47.759426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.817 qpair failed and we were unable to recover it. 00:25:12.817 [2024-07-15 23:51:47.759574] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.817 [2024-07-15 23:51:47.759600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.817 qpair failed and we were unable to recover it. 
00:25:12.817 [2024-07-15 23:51:47.759700] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.817 [2024-07-15 23:51:47.759728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.817 qpair failed and we were unable to recover it. 00:25:12.817 [2024-07-15 23:51:47.759824] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.817 [2024-07-15 23:51:47.759850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.817 qpair failed and we were unable to recover it. 00:25:12.817 [2024-07-15 23:51:47.759974] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.817 [2024-07-15 23:51:47.760011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.817 qpair failed and we were unable to recover it. 00:25:12.817 [2024-07-15 23:51:47.760115] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.817 [2024-07-15 23:51:47.760141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.817 qpair failed and we were unable to recover it. 00:25:12.817 [2024-07-15 23:51:47.760288] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.817 [2024-07-15 23:51:47.760338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.817 qpair failed and we were unable to recover it. 00:25:12.817 [2024-07-15 23:51:47.760459] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.817 [2024-07-15 23:51:47.760508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.817 qpair failed and we were unable to recover it. 00:25:12.817 [2024-07-15 23:51:47.760622] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.817 [2024-07-15 23:51:47.760673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.817 qpair failed and we were unable to recover it. 00:25:12.817 [2024-07-15 23:51:47.760770] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.817 [2024-07-15 23:51:47.760801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.817 qpair failed and we were unable to recover it. 00:25:12.817 [2024-07-15 23:51:47.760891] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.817 [2024-07-15 23:51:47.760918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.817 qpair failed and we were unable to recover it. 00:25:12.817 [2024-07-15 23:51:47.761047] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.817 [2024-07-15 23:51:47.761073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.817 qpair failed and we were unable to recover it. 
00:25:12.817 [2024-07-15 23:51:47.761164] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.817 [2024-07-15 23:51:47.761190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.817 qpair failed and we were unable to recover it. 00:25:12.817 [2024-07-15 23:51:47.761359] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.817 [2024-07-15 23:51:47.761384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.817 qpair failed and we were unable to recover it. 00:25:12.817 [2024-07-15 23:51:47.761532] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.817 [2024-07-15 23:51:47.761558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.817 qpair failed and we were unable to recover it. 00:25:12.817 [2024-07-15 23:51:47.761671] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.817 [2024-07-15 23:51:47.761710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.817 qpair failed and we were unable to recover it. 00:25:12.817 [2024-07-15 23:51:47.761840] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.817 [2024-07-15 23:51:47.761867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.817 qpair failed and we were unable to recover it. 00:25:12.817 [2024-07-15 23:51:47.761968] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.817 [2024-07-15 23:51:47.762000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.817 qpair failed and we were unable to recover it. 00:25:12.817 [2024-07-15 23:51:47.762115] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.817 [2024-07-15 23:51:47.762141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.817 qpair failed and we were unable to recover it. 00:25:12.817 [2024-07-15 23:51:47.762237] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.817 [2024-07-15 23:51:47.762263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.817 qpair failed and we were unable to recover it. 00:25:12.817 [2024-07-15 23:51:47.762360] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.817 [2024-07-15 23:51:47.762386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.817 qpair failed and we were unable to recover it. 00:25:12.817 [2024-07-15 23:51:47.762514] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.817 [2024-07-15 23:51:47.762542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.817 qpair failed and we were unable to recover it. 
00:25:12.817 [2024-07-15 23:51:47.762643] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.817 [2024-07-15 23:51:47.762671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.817 qpair failed and we were unable to recover it. 00:25:12.817 [2024-07-15 23:51:47.762802] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.817 [2024-07-15 23:51:47.762828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.817 qpair failed and we were unable to recover it. 00:25:12.817 [2024-07-15 23:51:47.762949] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.817 [2024-07-15 23:51:47.762982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.817 qpair failed and we were unable to recover it. 00:25:12.817 [2024-07-15 23:51:47.763106] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.817 [2024-07-15 23:51:47.763133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.817 qpair failed and we were unable to recover it. 00:25:12.817 [2024-07-15 23:51:47.763267] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.817 [2024-07-15 23:51:47.763293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.817 qpair failed and we were unable to recover it. 00:25:12.817 [2024-07-15 23:51:47.763390] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.817 [2024-07-15 23:51:47.763418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.817 qpair failed and we were unable to recover it. 00:25:12.817 [2024-07-15 23:51:47.763523] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.817 [2024-07-15 23:51:47.763549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.817 qpair failed and we were unable to recover it. 00:25:12.818 [2024-07-15 23:51:47.763694] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.818 [2024-07-15 23:51:47.763720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.818 qpair failed and we were unable to recover it. 00:25:12.818 [2024-07-15 23:51:47.763815] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.818 [2024-07-15 23:51:47.763842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.818 qpair failed and we were unable to recover it. 00:25:12.818 [2024-07-15 23:51:47.763940] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.818 [2024-07-15 23:51:47.763973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.818 qpair failed and we were unable to recover it. 
00:25:12.818 [2024-07-15 23:51:47.764075] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.818 [2024-07-15 23:51:47.764100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.818 qpair failed and we were unable to recover it. 00:25:12.818 [2024-07-15 23:51:47.764193] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.818 [2024-07-15 23:51:47.764220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.818 qpair failed and we were unable to recover it. 00:25:12.818 [2024-07-15 23:51:47.764325] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.818 [2024-07-15 23:51:47.764351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.818 qpair failed and we were unable to recover it. 00:25:12.818 [2024-07-15 23:51:47.764470] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.818 [2024-07-15 23:51:47.764495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.818 qpair failed and we were unable to recover it. 00:25:12.818 [2024-07-15 23:51:47.764599] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.818 [2024-07-15 23:51:47.764627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.818 qpair failed and we were unable to recover it. 00:25:12.818 [2024-07-15 23:51:47.764730] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.818 [2024-07-15 23:51:47.764757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.818 qpair failed and we were unable to recover it. 00:25:12.818 [2024-07-15 23:51:47.764892] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.818 [2024-07-15 23:51:47.764931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.818 qpair failed and we were unable to recover it. 00:25:12.818 [2024-07-15 23:51:47.765080] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.818 [2024-07-15 23:51:47.765107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.818 qpair failed and we were unable to recover it. 00:25:12.818 [2024-07-15 23:51:47.765205] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.818 [2024-07-15 23:51:47.765231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.818 qpair failed and we were unable to recover it. 00:25:12.818 [2024-07-15 23:51:47.765349] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.818 [2024-07-15 23:51:47.765374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.818 qpair failed and we were unable to recover it. 
00:25:12.818 [2024-07-15 23:51:47.765496] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.818 [2024-07-15 23:51:47.765522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.818 qpair failed and we were unable to recover it. 00:25:12.818 [2024-07-15 23:51:47.765613] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.818 [2024-07-15 23:51:47.765639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.818 qpair failed and we were unable to recover it. 00:25:12.818 [2024-07-15 23:51:47.765739] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.818 [2024-07-15 23:51:47.765766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.818 qpair failed and we were unable to recover it. 00:25:12.818 [2024-07-15 23:51:47.765860] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.818 [2024-07-15 23:51:47.765887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.818 qpair failed and we were unable to recover it. 00:25:12.818 [2024-07-15 23:51:47.766014] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.818 [2024-07-15 23:51:47.766040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.818 qpair failed and we were unable to recover it. 00:25:12.818 [2024-07-15 23:51:47.766137] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.818 [2024-07-15 23:51:47.766163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.818 qpair failed and we were unable to recover it. 00:25:12.818 [2024-07-15 23:51:47.766256] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.818 [2024-07-15 23:51:47.766283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.818 qpair failed and we were unable to recover it. 00:25:12.818 [2024-07-15 23:51:47.766407] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.818 [2024-07-15 23:51:47.766439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.818 qpair failed and we were unable to recover it. 00:25:12.818 [2024-07-15 23:51:47.766525] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.818 [2024-07-15 23:51:47.766551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.818 qpair failed and we were unable to recover it. 00:25:12.818 [2024-07-15 23:51:47.766670] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.818 [2024-07-15 23:51:47.766695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.818 qpair failed and we were unable to recover it. 
00:25:12.818 [2024-07-15 23:51:47.766857] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.818 [2024-07-15 23:51:47.766895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.818 qpair failed and we were unable to recover it. 00:25:12.818 [2024-07-15 23:51:47.767028] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.818 [2024-07-15 23:51:47.767056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.818 qpair failed and we were unable to recover it. 00:25:12.818 [2024-07-15 23:51:47.767156] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.818 [2024-07-15 23:51:47.767184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.818 qpair failed and we were unable to recover it. 00:25:12.818 [2024-07-15 23:51:47.767311] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.818 [2024-07-15 23:51:47.767359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.818 qpair failed and we were unable to recover it. 00:25:12.818 [2024-07-15 23:51:47.767450] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.818 [2024-07-15 23:51:47.767476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.818 qpair failed and we were unable to recover it. 00:25:12.818 [2024-07-15 23:51:47.767579] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.818 [2024-07-15 23:51:47.767605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.818 qpair failed and we were unable to recover it. 00:25:12.818 [2024-07-15 23:51:47.767693] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.818 [2024-07-15 23:51:47.767719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.818 qpair failed and we were unable to recover it. 00:25:12.818 [2024-07-15 23:51:47.767824] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.818 [2024-07-15 23:51:47.767864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.818 qpair failed and we were unable to recover it. 00:25:12.818 [2024-07-15 23:51:47.767994] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.818 [2024-07-15 23:51:47.768024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.818 qpair failed and we were unable to recover it. 00:25:12.818 [2024-07-15 23:51:47.768134] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.818 [2024-07-15 23:51:47.768161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.818 qpair failed and we were unable to recover it. 
00:25:12.818 [2024-07-15 23:51:47.768304] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.818 [2024-07-15 23:51:47.768329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.818 qpair failed and we were unable to recover it. 00:25:12.818 [2024-07-15 23:51:47.768454] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.818 [2024-07-15 23:51:47.768500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.818 qpair failed and we were unable to recover it. 00:25:12.818 [2024-07-15 23:51:47.768666] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.818 [2024-07-15 23:51:47.768703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.818 qpair failed and we were unable to recover it. 00:25:12.818 [2024-07-15 23:51:47.768831] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.818 [2024-07-15 23:51:47.768857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.818 qpair failed and we were unable to recover it. 00:25:12.818 [2024-07-15 23:51:47.768982] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.818 [2024-07-15 23:51:47.769009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.818 qpair failed and we were unable to recover it. 00:25:12.818 [2024-07-15 23:51:47.769110] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.818 [2024-07-15 23:51:47.769136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.818 qpair failed and we were unable to recover it. 00:25:12.818 [2024-07-15 23:51:47.769253] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.818 [2024-07-15 23:51:47.769290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.818 qpair failed and we were unable to recover it. 00:25:12.818 [2024-07-15 23:51:47.769441] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.818 [2024-07-15 23:51:47.769467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.818 qpair failed and we were unable to recover it. 00:25:12.818 [2024-07-15 23:51:47.769574] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.818 [2024-07-15 23:51:47.769603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.818 qpair failed and we were unable to recover it. 00:25:12.818 [2024-07-15 23:51:47.769714] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.818 [2024-07-15 23:51:47.769743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.818 qpair failed and we were unable to recover it. 
00:25:12.818 [2024-07-15 23:51:47.769851] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.818 [2024-07-15 23:51:47.769890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.818 qpair failed and we were unable to recover it. 00:25:12.819 [2024-07-15 23:51:47.769987] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.819 [2024-07-15 23:51:47.770015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.819 qpair failed and we were unable to recover it. 00:25:12.819 [2024-07-15 23:51:47.770135] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.819 [2024-07-15 23:51:47.770161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.819 qpair failed and we were unable to recover it. 00:25:12.819 [2024-07-15 23:51:47.770262] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.819 [2024-07-15 23:51:47.770287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.819 qpair failed and we were unable to recover it. 00:25:12.819 [2024-07-15 23:51:47.770406] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.819 [2024-07-15 23:51:47.770438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.819 qpair failed and we were unable to recover it. 00:25:12.819 [2024-07-15 23:51:47.770539] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.819 [2024-07-15 23:51:47.770564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.819 qpair failed and we were unable to recover it. 00:25:12.819 [2024-07-15 23:51:47.770661] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.819 [2024-07-15 23:51:47.770687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.819 qpair failed and we were unable to recover it. 00:25:12.819 [2024-07-15 23:51:47.770824] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.819 [2024-07-15 23:51:47.770851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.819 qpair failed and we were unable to recover it. 00:25:12.819 [2024-07-15 23:51:47.770980] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.819 [2024-07-15 23:51:47.771023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.819 qpair failed and we were unable to recover it. 00:25:12.819 [2024-07-15 23:51:47.771120] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.819 [2024-07-15 23:51:47.771146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.819 qpair failed and we were unable to recover it. 
00:25:12.819 [2024-07-15 23:51:47.771293] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.819 [2024-07-15 23:51:47.771318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.819 qpair failed and we were unable to recover it. 00:25:12.819 [2024-07-15 23:51:47.771426] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.819 [2024-07-15 23:51:47.771474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.819 qpair failed and we were unable to recover it. 00:25:12.819 [2024-07-15 23:51:47.771623] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.819 [2024-07-15 23:51:47.771659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.819 qpair failed and we were unable to recover it. 00:25:12.819 [2024-07-15 23:51:47.771815] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.819 [2024-07-15 23:51:47.771849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.819 qpair failed and we were unable to recover it. 00:25:12.819 [2024-07-15 23:51:47.771998] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.819 [2024-07-15 23:51:47.772024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.819 qpair failed and we were unable to recover it. 00:25:12.819 [2024-07-15 23:51:47.772143] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.819 [2024-07-15 23:51:47.772180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.819 qpair failed and we were unable to recover it. 00:25:12.819 [2024-07-15 23:51:47.772318] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.819 [2024-07-15 23:51:47.772344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.819 qpair failed and we were unable to recover it. 00:25:12.819 [2024-07-15 23:51:47.772513] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.819 [2024-07-15 23:51:47.772539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.819 qpair failed and we were unable to recover it. 00:25:12.819 [2024-07-15 23:51:47.772644] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.819 [2024-07-15 23:51:47.772671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.819 qpair failed and we were unable to recover it. 00:25:12.819 [2024-07-15 23:51:47.772791] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.819 [2024-07-15 23:51:47.772818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.819 qpair failed and we were unable to recover it. 
00:25:12.819 [2024-07-15 23:51:47.772915] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:12.819 [2024-07-15 23:51:47.772941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:12.819 qpair failed and we were unable to recover it.
00:25:12.819 [2024-07-15 23:51:47.773928] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:12.819 [2024-07-15 23:51:47.773976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:12.819 qpair failed and we were unable to recover it.
00:25:12.819 [2024-07-15 23:51:47.774648] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:12.819 [2024-07-15 23:51:47.774689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:12.819 qpair failed and we were unable to recover it.
00:25:12.823 [2024-07-15 23:51:47.797631] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:12.823 [2024-07-15 23:51:47.797669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:12.823 qpair failed and we were unable to recover it.
[... the same three-line failure repeats continuously from 23:51:47.772915 through 23:51:47.803040 across tqpair=0x7feb94000b90, 0x7feb8c000b90, 0x7feb84000b90, and 0x7a7200; every attempt targets addr=10.0.0.2, port=4420 and ends "qpair failed and we were unable to recover it." ...]
00:25:12.824 [2024-07-15 23:51:47.803013] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:12.824 [2024-07-15 23:51:47.803040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:12.824 qpair failed and we were unable to recover it.
00:25:12.824 [2024-07-15 23:51:47.803159] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.824 [2024-07-15 23:51:47.803185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.824 qpair failed and we were unable to recover it. 00:25:12.824 [2024-07-15 23:51:47.803300] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.824 [2024-07-15 23:51:47.803332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.824 qpair failed and we were unable to recover it. 00:25:12.824 [2024-07-15 23:51:47.803431] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.824 [2024-07-15 23:51:47.803458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.824 qpair failed and we were unable to recover it. 00:25:12.824 [2024-07-15 23:51:47.803579] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.824 [2024-07-15 23:51:47.803605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.824 qpair failed and we were unable to recover it. 00:25:12.824 [2024-07-15 23:51:47.803693] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.824 [2024-07-15 23:51:47.803719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.824 qpair failed and we were unable to recover it. 00:25:12.824 [2024-07-15 23:51:47.803848] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.824 [2024-07-15 23:51:47.803874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.824 qpair failed and we were unable to recover it. 00:25:12.824 [2024-07-15 23:51:47.803997] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.824 [2024-07-15 23:51:47.804024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.824 qpair failed and we were unable to recover it. 00:25:12.824 [2024-07-15 23:51:47.804144] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.824 [2024-07-15 23:51:47.804170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.824 qpair failed and we were unable to recover it. 00:25:12.824 [2024-07-15 23:51:47.804265] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.824 [2024-07-15 23:51:47.804291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.824 qpair failed and we were unable to recover it. 00:25:12.824 [2024-07-15 23:51:47.804409] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.824 [2024-07-15 23:51:47.804434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.824 qpair failed and we were unable to recover it. 
00:25:12.824 [2024-07-15 23:51:47.804557] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.824 [2024-07-15 23:51:47.804584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.824 qpair failed and we were unable to recover it. 00:25:12.824 [2024-07-15 23:51:47.804679] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.824 [2024-07-15 23:51:47.804705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.824 qpair failed and we were unable to recover it. 00:25:12.824 [2024-07-15 23:51:47.804803] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.824 [2024-07-15 23:51:47.804829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.824 qpair failed and we were unable to recover it. 00:25:12.824 [2024-07-15 23:51:47.804920] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.824 [2024-07-15 23:51:47.804946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.824 qpair failed and we were unable to recover it. 00:25:12.824 [2024-07-15 23:51:47.805052] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.824 [2024-07-15 23:51:47.805079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.824 qpair failed and we were unable to recover it. 00:25:12.824 [2024-07-15 23:51:47.805183] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.824 [2024-07-15 23:51:47.805212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.824 qpair failed and we were unable to recover it. 00:25:12.824 [2024-07-15 23:51:47.805336] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.824 [2024-07-15 23:51:47.805362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.824 qpair failed and we were unable to recover it. 00:25:12.824 [2024-07-15 23:51:47.805469] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.824 [2024-07-15 23:51:47.805506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.824 qpair failed and we were unable to recover it. 00:25:12.824 [2024-07-15 23:51:47.805623] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.824 [2024-07-15 23:51:47.805648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.824 qpair failed and we were unable to recover it. 00:25:12.824 [2024-07-15 23:51:47.805744] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.824 [2024-07-15 23:51:47.805769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.824 qpair failed and we were unable to recover it. 
00:25:12.824 [2024-07-15 23:51:47.805893] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.824 [2024-07-15 23:51:47.805920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.824 qpair failed and we were unable to recover it. 00:25:12.824 [2024-07-15 23:51:47.806039] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.824 [2024-07-15 23:51:47.806066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.824 qpair failed and we were unable to recover it. 00:25:12.824 [2024-07-15 23:51:47.806168] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.824 [2024-07-15 23:51:47.806194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.824 qpair failed and we were unable to recover it. 00:25:12.824 [2024-07-15 23:51:47.806314] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.824 [2024-07-15 23:51:47.806341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.824 qpair failed and we were unable to recover it. 00:25:12.824 [2024-07-15 23:51:47.806470] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.824 [2024-07-15 23:51:47.806496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.824 qpair failed and we were unable to recover it. 00:25:12.824 [2024-07-15 23:51:47.806617] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.824 [2024-07-15 23:51:47.806643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.824 qpair failed and we were unable to recover it. 00:25:12.825 [2024-07-15 23:51:47.806741] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.825 [2024-07-15 23:51:47.806768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.825 qpair failed and we were unable to recover it. 00:25:12.825 [2024-07-15 23:51:47.806894] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.825 [2024-07-15 23:51:47.806920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.825 qpair failed and we were unable to recover it. 00:25:12.825 [2024-07-15 23:51:47.807024] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.825 [2024-07-15 23:51:47.807051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.825 qpair failed and we were unable to recover it. 00:25:12.825 [2024-07-15 23:51:47.807181] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.825 [2024-07-15 23:51:47.807207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.825 qpair failed and we were unable to recover it. 
00:25:12.825 [2024-07-15 23:51:47.807325] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.825 [2024-07-15 23:51:47.807351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.825 qpair failed and we were unable to recover it. 00:25:12.825 [2024-07-15 23:51:47.807449] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.825 [2024-07-15 23:51:47.807475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.825 qpair failed and we were unable to recover it. 00:25:12.825 [2024-07-15 23:51:47.807573] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.825 [2024-07-15 23:51:47.807599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.825 qpair failed and we were unable to recover it. 00:25:12.825 [2024-07-15 23:51:47.807722] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.825 [2024-07-15 23:51:47.807750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.825 qpair failed and we were unable to recover it. 00:25:12.825 [2024-07-15 23:51:47.807870] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.825 [2024-07-15 23:51:47.807896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.825 qpair failed and we were unable to recover it. 00:25:12.825 [2024-07-15 23:51:47.807989] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.825 [2024-07-15 23:51:47.808015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.825 qpair failed and we were unable to recover it. 00:25:12.825 [2024-07-15 23:51:47.808138] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.825 [2024-07-15 23:51:47.808164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.825 qpair failed and we were unable to recover it. 00:25:12.825 [2024-07-15 23:51:47.808285] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.825 [2024-07-15 23:51:47.808310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.825 qpair failed and we were unable to recover it. 00:25:12.825 [2024-07-15 23:51:47.808432] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.825 [2024-07-15 23:51:47.808459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.825 qpair failed and we were unable to recover it. 00:25:12.825 [2024-07-15 23:51:47.808552] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.825 [2024-07-15 23:51:47.808580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.825 qpair failed and we were unable to recover it. 
00:25:12.825 [2024-07-15 23:51:47.808702] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.825 [2024-07-15 23:51:47.808728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.825 qpair failed and we were unable to recover it. 00:25:12.825 [2024-07-15 23:51:47.808823] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.825 [2024-07-15 23:51:47.808853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.825 qpair failed and we were unable to recover it. 00:25:12.825 [2024-07-15 23:51:47.808976] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.825 [2024-07-15 23:51:47.809003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.825 qpair failed and we were unable to recover it. 00:25:12.825 [2024-07-15 23:51:47.809120] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.825 [2024-07-15 23:51:47.809146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.825 qpair failed and we were unable to recover it. 00:25:12.825 [2024-07-15 23:51:47.809282] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.825 [2024-07-15 23:51:47.809308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.825 qpair failed and we were unable to recover it. 00:25:12.825 [2024-07-15 23:51:47.809405] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.825 [2024-07-15 23:51:47.809431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.825 qpair failed and we were unable to recover it. 00:25:12.825 [2024-07-15 23:51:47.809518] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.825 [2024-07-15 23:51:47.809543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.825 qpair failed and we were unable to recover it. 00:25:12.825 [2024-07-15 23:51:47.809664] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.825 [2024-07-15 23:51:47.809690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.825 qpair failed and we were unable to recover it. 00:25:12.825 [2024-07-15 23:51:47.809846] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.825 [2024-07-15 23:51:47.809875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.825 qpair failed and we were unable to recover it. 00:25:12.825 [2024-07-15 23:51:47.809972] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.825 [2024-07-15 23:51:47.809999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.825 qpair failed and we were unable to recover it. 
00:25:12.825 [2024-07-15 23:51:47.810124] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.825 [2024-07-15 23:51:47.810150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.825 qpair failed and we were unable to recover it. 00:25:12.825 [2024-07-15 23:51:47.810246] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.825 [2024-07-15 23:51:47.810272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.825 qpair failed and we were unable to recover it. 00:25:12.825 [2024-07-15 23:51:47.810360] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.825 [2024-07-15 23:51:47.810386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.825 qpair failed and we were unable to recover it. 00:25:12.825 [2024-07-15 23:51:47.810506] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.825 [2024-07-15 23:51:47.810532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.825 qpair failed and we were unable to recover it. 00:25:12.825 [2024-07-15 23:51:47.810631] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.825 [2024-07-15 23:51:47.810657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.825 qpair failed and we were unable to recover it. 00:25:12.825 [2024-07-15 23:51:47.810780] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.825 [2024-07-15 23:51:47.810819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.825 qpair failed and we were unable to recover it. 00:25:12.825 [2024-07-15 23:51:47.810919] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.825 [2024-07-15 23:51:47.810946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.825 qpair failed and we were unable to recover it. 00:25:12.825 [2024-07-15 23:51:47.811090] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.825 [2024-07-15 23:51:47.811118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.825 qpair failed and we were unable to recover it. 00:25:12.825 [2024-07-15 23:51:47.811218] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.825 [2024-07-15 23:51:47.811244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.825 qpair failed and we were unable to recover it. 00:25:12.825 [2024-07-15 23:51:47.811363] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.825 [2024-07-15 23:51:47.811389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.825 qpair failed and we were unable to recover it. 
00:25:12.825 [2024-07-15 23:51:47.811489] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.825 [2024-07-15 23:51:47.811517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.825 qpair failed and we were unable to recover it. 00:25:12.825 [2024-07-15 23:51:47.811615] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.825 [2024-07-15 23:51:47.811643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.825 qpair failed and we were unable to recover it. 00:25:12.825 [2024-07-15 23:51:47.811768] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.825 [2024-07-15 23:51:47.811796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.825 qpair failed and we were unable to recover it. 00:25:12.825 [2024-07-15 23:51:47.811918] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.825 [2024-07-15 23:51:47.811944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.825 qpair failed and we were unable to recover it. 00:25:12.825 [2024-07-15 23:51:47.812049] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.826 [2024-07-15 23:51:47.812075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.826 qpair failed and we were unable to recover it. 00:25:12.826 [2024-07-15 23:51:47.812169] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.826 [2024-07-15 23:51:47.812195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.826 qpair failed and we were unable to recover it. 00:25:12.826 [2024-07-15 23:51:47.812313] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.826 [2024-07-15 23:51:47.812339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.826 qpair failed and we were unable to recover it. 00:25:12.826 [2024-07-15 23:51:47.812464] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.826 [2024-07-15 23:51:47.812490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.826 qpair failed and we were unable to recover it. 00:25:12.826 [2024-07-15 23:51:47.812598] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.826 [2024-07-15 23:51:47.812624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.826 qpair failed and we were unable to recover it. 00:25:12.826 [2024-07-15 23:51:47.812749] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.826 [2024-07-15 23:51:47.812777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.826 qpair failed and we were unable to recover it. 
00:25:12.826 [2024-07-15 23:51:47.812869] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.826 [2024-07-15 23:51:47.812895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.826 qpair failed and we were unable to recover it. 00:25:12.826 [2024-07-15 23:51:47.812989] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.826 [2024-07-15 23:51:47.813015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.826 qpair failed and we were unable to recover it. 00:25:12.826 [2024-07-15 23:51:47.813135] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.826 [2024-07-15 23:51:47.813161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.826 qpair failed and we were unable to recover it. 00:25:12.826 [2024-07-15 23:51:47.813255] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.826 [2024-07-15 23:51:47.813282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.826 qpair failed and we were unable to recover it. 00:25:12.826 [2024-07-15 23:51:47.813371] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.826 [2024-07-15 23:51:47.813399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.826 qpair failed and we were unable to recover it. 00:25:12.826 [2024-07-15 23:51:47.813495] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.826 [2024-07-15 23:51:47.813522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.826 qpair failed and we were unable to recover it. 00:25:12.826 [2024-07-15 23:51:47.813621] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.826 [2024-07-15 23:51:47.813648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.826 qpair failed and we were unable to recover it. 00:25:12.826 [2024-07-15 23:51:47.813757] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.826 [2024-07-15 23:51:47.813783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.826 qpair failed and we were unable to recover it. 00:25:12.826 [2024-07-15 23:51:47.813880] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.826 [2024-07-15 23:51:47.813906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.826 qpair failed and we were unable to recover it. 00:25:12.826 [2024-07-15 23:51:47.814002] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.826 [2024-07-15 23:51:47.814030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.826 qpair failed and we were unable to recover it. 
00:25:12.826 [2024-07-15 23:51:47.814130] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.826 [2024-07-15 23:51:47.814156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.826 qpair failed and we were unable to recover it. 00:25:12.826 [2024-07-15 23:51:47.814250] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.826 [2024-07-15 23:51:47.814281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.826 qpair failed and we were unable to recover it. 00:25:12.826 [2024-07-15 23:51:47.814415] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.826 [2024-07-15 23:51:47.814442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.826 qpair failed and we were unable to recover it. 00:25:12.826 [2024-07-15 23:51:47.814536] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.826 [2024-07-15 23:51:47.814562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.826 qpair failed and we were unable to recover it. 00:25:12.826 [2024-07-15 23:51:47.814655] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.826 [2024-07-15 23:51:47.814681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.826 qpair failed and we were unable to recover it. 00:25:12.826 [2024-07-15 23:51:47.814815] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.826 [2024-07-15 23:51:47.814854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.826 qpair failed and we were unable to recover it. 00:25:12.826 [2024-07-15 23:51:47.814972] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.826 [2024-07-15 23:51:47.815001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.826 qpair failed and we were unable to recover it. 00:25:12.826 [2024-07-15 23:51:47.815099] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.826 [2024-07-15 23:51:47.815126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.826 qpair failed and we were unable to recover it. 00:25:12.826 [2024-07-15 23:51:47.815231] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.826 [2024-07-15 23:51:47.815259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.826 qpair failed and we were unable to recover it. 00:25:12.826 [2024-07-15 23:51:47.815383] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.826 [2024-07-15 23:51:47.815409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.826 qpair failed and we were unable to recover it. 
00:25:12.826 [2024-07-15 23:51:47.815502] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.826 [2024-07-15 23:51:47.815528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.826 qpair failed and we were unable to recover it. 00:25:12.826 [2024-07-15 23:51:47.815643] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.826 [2024-07-15 23:51:47.815669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.826 qpair failed and we were unable to recover it. 00:25:12.826 [2024-07-15 23:51:47.815785] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.826 [2024-07-15 23:51:47.815810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.826 qpair failed and we were unable to recover it. 00:25:12.826 [2024-07-15 23:51:47.815914] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.826 [2024-07-15 23:51:47.815953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.826 qpair failed and we were unable to recover it. 00:25:12.826 [2024-07-15 23:51:47.816076] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.826 [2024-07-15 23:51:47.816104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.826 qpair failed and we were unable to recover it. 00:25:12.826 [2024-07-15 23:51:47.816209] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.826 [2024-07-15 23:51:47.816235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.826 qpair failed and we were unable to recover it. 00:25:12.826 [2024-07-15 23:51:47.816355] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.826 [2024-07-15 23:51:47.816382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.826 qpair failed and we were unable to recover it. 00:25:12.826 [2024-07-15 23:51:47.816504] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.826 [2024-07-15 23:51:47.816531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.826 qpair failed and we were unable to recover it. 00:25:12.826 [2024-07-15 23:51:47.816624] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.826 [2024-07-15 23:51:47.816650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.826 qpair failed and we were unable to recover it. 00:25:12.827 [2024-07-15 23:51:47.816749] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.827 [2024-07-15 23:51:47.816777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.827 qpair failed and we were unable to recover it. 
00:25:12.827 [2024-07-15 23:51:47.816913] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.827 [2024-07-15 23:51:47.816952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.827 qpair failed and we were unable to recover it. 00:25:12.827 [2024-07-15 23:51:47.817075] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.827 [2024-07-15 23:51:47.817104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.827 qpair failed and we were unable to recover it. 00:25:12.827 [2024-07-15 23:51:47.817228] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.827 [2024-07-15 23:51:47.817254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.827 qpair failed and we were unable to recover it. 00:25:12.827 [2024-07-15 23:51:47.817354] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.827 [2024-07-15 23:51:47.817381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.827 qpair failed and we were unable to recover it. 00:25:12.827 [2024-07-15 23:51:47.817478] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.827 [2024-07-15 23:51:47.817505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.827 qpair failed and we were unable to recover it. 00:25:12.827 [2024-07-15 23:51:47.817628] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.827 [2024-07-15 23:51:47.817655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.827 qpair failed and we were unable to recover it. 00:25:12.827 [2024-07-15 23:51:47.817748] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.827 [2024-07-15 23:51:47.817776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.827 qpair failed and we were unable to recover it. 00:25:12.827 [2024-07-15 23:51:47.817901] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.827 [2024-07-15 23:51:47.817927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.827 qpair failed and we were unable to recover it. 00:25:12.827 [2024-07-15 23:51:47.818056] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.827 [2024-07-15 23:51:47.818082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.827 qpair failed and we were unable to recover it. 00:25:12.827 [2024-07-15 23:51:47.818202] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.827 [2024-07-15 23:51:47.818229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.827 qpair failed and we were unable to recover it. 
00:25:12.827 [2024-07-15 23:51:47.818326] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.827 [2024-07-15 23:51:47.818352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:12.827 qpair failed and we were unable to recover it. 00:25:12.827 [2024-07-15 23:51:47.818459] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.827 [2024-07-15 23:51:47.818486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.827 qpair failed and we were unable to recover it. 00:25:12.827 [2024-07-15 23:51:47.818592] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.827 [2024-07-15 23:51:47.818620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.827 qpair failed and we were unable to recover it. 00:25:12.827 [2024-07-15 23:51:47.818717] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.827 [2024-07-15 23:51:47.818743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.827 qpair failed and we were unable to recover it. 00:25:12.827 [2024-07-15 23:51:47.818863] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.827 [2024-07-15 23:51:47.818889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.827 qpair failed and we were unable to recover it. 00:25:12.827 [2024-07-15 23:51:47.818987] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.827 [2024-07-15 23:51:47.819013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.827 qpair failed and we were unable to recover it. 00:25:12.827 [2024-07-15 23:51:47.819112] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.827 [2024-07-15 23:51:47.819138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.827 qpair failed and we were unable to recover it. 00:25:12.827 [2024-07-15 23:51:47.819229] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.827 [2024-07-15 23:51:47.819255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.827 qpair failed and we were unable to recover it. 00:25:12.827 [2024-07-15 23:51:47.819382] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.827 [2024-07-15 23:51:47.819407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.827 qpair failed and we were unable to recover it. 00:25:12.827 [2024-07-15 23:51:47.819527] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.827 [2024-07-15 23:51:47.819554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.827 qpair failed and we were unable to recover it. 
00:25:12.827 [2024-07-15 23:51:47.819649] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.827 [2024-07-15 23:51:47.819675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.827 qpair failed and we were unable to recover it. 00:25:12.827 [2024-07-15 23:51:47.819791] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.827 [2024-07-15 23:51:47.819821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.827 qpair failed and we were unable to recover it. 00:25:12.827 [2024-07-15 23:51:47.819913] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.827 [2024-07-15 23:51:47.819939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.827 qpair failed and we were unable to recover it. 00:25:12.827 [2024-07-15 23:51:47.820044] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.827 [2024-07-15 23:51:47.820071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.827 qpair failed and we were unable to recover it. 00:25:12.827 [2024-07-15 23:51:47.820161] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.827 [2024-07-15 23:51:47.820187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.827 qpair failed and we were unable to recover it. 00:25:12.827 [2024-07-15 23:51:47.820287] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.827 [2024-07-15 23:51:47.820312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.827 qpair failed and we were unable to recover it. 00:25:12.827 [2024-07-15 23:51:47.820408] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.827 [2024-07-15 23:51:47.820434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.827 qpair failed and we were unable to recover it. 00:25:12.827 [2024-07-15 23:51:47.820552] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.827 [2024-07-15 23:51:47.820577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.827 qpair failed and we were unable to recover it. 00:25:12.827 [2024-07-15 23:51:47.820672] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.827 [2024-07-15 23:51:47.820698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.827 qpair failed and we were unable to recover it. 00:25:12.827 [2024-07-15 23:51:47.820823] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.827 [2024-07-15 23:51:47.820849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.827 qpair failed and we were unable to recover it. 
00:25:12.827 [2024-07-15 23:51:47.820944] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.827 [2024-07-15 23:51:47.820977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.827 qpair failed and we were unable to recover it. 00:25:12.827 [2024-07-15 23:51:47.821070] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.827 [2024-07-15 23:51:47.821096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.827 qpair failed and we were unable to recover it. 00:25:12.827 [2024-07-15 23:51:47.821242] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.827 [2024-07-15 23:51:47.821268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.827 qpair failed and we were unable to recover it. 00:25:12.827 [2024-07-15 23:51:47.821384] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.827 [2024-07-15 23:51:47.821409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.827 qpair failed and we were unable to recover it. 00:25:12.827 [2024-07-15 23:51:47.821532] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.827 [2024-07-15 23:51:47.821558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.827 qpair failed and we were unable to recover it. 00:25:12.827 [2024-07-15 23:51:47.821662] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.827 [2024-07-15 23:51:47.821688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:12.827 qpair failed and we were unable to recover it. 00:25:12.827 [2024-07-15 23:51:47.821791] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.827 [2024-07-15 23:51:47.821830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.827 qpair failed and we were unable to recover it. 00:25:12.827 [2024-07-15 23:51:47.821968] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.827 [2024-07-15 23:51:47.821996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.827 qpair failed and we were unable to recover it. 00:25:12.827 [2024-07-15 23:51:47.822091] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.827 [2024-07-15 23:51:47.822118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.827 qpair failed and we were unable to recover it. 00:25:12.827 [2024-07-15 23:51:47.822245] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.827 [2024-07-15 23:51:47.822271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.827 qpair failed and we were unable to recover it. 
00:25:12.829 [2024-07-15 23:51:47.835435] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:12.829 [2024-07-15 23:51:47.835474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:12.829 qpair failed and we were unable to recover it.
00:25:12.831 [2024-07-15 23:51:47.851821] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.831 [2024-07-15 23:51:47.851857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.831 qpair failed and we were unable to recover it. 00:25:12.831 [2024-07-15 23:51:47.852016] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.831 [2024-07-15 23:51:47.852053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.831 qpair failed and we were unable to recover it. 00:25:12.831 [2024-07-15 23:51:47.852208] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.831 [2024-07-15 23:51:47.852243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.831 qpair failed and we were unable to recover it. 00:25:12.831 [2024-07-15 23:51:47.852397] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.831 [2024-07-15 23:51:47.852431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.831 qpair failed and we were unable to recover it. 00:25:12.831 [2024-07-15 23:51:47.852588] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.831 [2024-07-15 23:51:47.852623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.831 qpair failed and we were unable to recover it. 00:25:12.831 [2024-07-15 23:51:47.852773] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.831 [2024-07-15 23:51:47.852839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.831 qpair failed and we were unable to recover it. 00:25:12.831 [2024-07-15 23:51:47.853032] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.831 [2024-07-15 23:51:47.853093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.831 qpair failed and we were unable to recover it. 00:25:12.831 [2024-07-15 23:51:47.853277] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.831 [2024-07-15 23:51:47.853336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.831 qpair failed and we were unable to recover it. 00:25:12.831 [2024-07-15 23:51:47.853486] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.831 [2024-07-15 23:51:47.853545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.831 qpair failed and we were unable to recover it. 00:25:12.831 [2024-07-15 23:51:47.853733] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.831 [2024-07-15 23:51:47.853768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.831 qpair failed and we were unable to recover it. 
00:25:12.831 [2024-07-15 23:51:47.853928] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.831 [2024-07-15 23:51:47.853970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.831 qpair failed and we were unable to recover it. 00:25:12.831 [2024-07-15 23:51:47.854129] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.831 [2024-07-15 23:51:47.854165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.831 qpair failed and we were unable to recover it. 00:25:12.831 [2024-07-15 23:51:47.854303] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.831 [2024-07-15 23:51:47.854338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.831 qpair failed and we were unable to recover it. 00:25:12.831 [2024-07-15 23:51:47.854492] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.831 [2024-07-15 23:51:47.854526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.831 qpair failed and we were unable to recover it. 00:25:12.831 [2024-07-15 23:51:47.857077] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.831 [2024-07-15 23:51:47.857113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.831 qpair failed and we were unable to recover it. 00:25:12.831 [2024-07-15 23:51:47.857271] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.831 [2024-07-15 23:51:47.857306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.831 qpair failed and we were unable to recover it. 00:25:12.831 [2024-07-15 23:51:47.857489] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.831 [2024-07-15 23:51:47.857524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.831 qpair failed and we were unable to recover it. 00:25:12.831 [2024-07-15 23:51:47.857710] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.831 [2024-07-15 23:51:47.857745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.831 qpair failed and we were unable to recover it. 00:25:12.831 [2024-07-15 23:51:47.857899] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.831 [2024-07-15 23:51:47.857934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.831 qpair failed and we were unable to recover it. 00:25:12.831 [2024-07-15 23:51:47.858079] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.832 [2024-07-15 23:51:47.858114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.832 qpair failed and we were unable to recover it. 
00:25:12.832 [2024-07-15 23:51:47.858301] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.832 [2024-07-15 23:51:47.858335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.832 qpair failed and we were unable to recover it. 00:25:12.832 [2024-07-15 23:51:47.858474] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.832 [2024-07-15 23:51:47.858515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.832 qpair failed and we were unable to recover it. 00:25:12.832 [2024-07-15 23:51:47.858668] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.832 [2024-07-15 23:51:47.858704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.832 qpair failed and we were unable to recover it. 00:25:12.832 [2024-07-15 23:51:47.858832] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.832 [2024-07-15 23:51:47.858868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.832 qpair failed and we were unable to recover it. 00:25:12.832 [2024-07-15 23:51:47.859070] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.832 [2024-07-15 23:51:47.859096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.832 qpair failed and we were unable to recover it. 00:25:12.832 [2024-07-15 23:51:47.859218] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.832 [2024-07-15 23:51:47.859244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.832 qpair failed and we were unable to recover it. 00:25:12.832 [2024-07-15 23:51:47.859337] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.832 [2024-07-15 23:51:47.859386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.832 qpair failed and we were unable to recover it. 00:25:12.832 [2024-07-15 23:51:47.859520] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.832 [2024-07-15 23:51:47.859556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.832 qpair failed and we were unable to recover it. 00:25:12.832 [2024-07-15 23:51:47.859747] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.832 [2024-07-15 23:51:47.859783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.832 qpair failed and we were unable to recover it. 00:25:12.832 [2024-07-15 23:51:47.859922] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.832 [2024-07-15 23:51:47.859970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.832 qpair failed and we were unable to recover it. 
00:25:12.832 [2024-07-15 23:51:47.860107] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.832 [2024-07-15 23:51:47.860143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.832 qpair failed and we were unable to recover it. 00:25:12.832 [2024-07-15 23:51:47.860276] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.832 [2024-07-15 23:51:47.860312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.832 qpair failed and we were unable to recover it. 00:25:12.832 [2024-07-15 23:51:47.860505] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.832 [2024-07-15 23:51:47.860541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.832 qpair failed and we were unable to recover it. 00:25:12.832 [2024-07-15 23:51:47.860703] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.832 [2024-07-15 23:51:47.860739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.832 qpair failed and we were unable to recover it. 00:25:12.832 [2024-07-15 23:51:47.860867] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.832 [2024-07-15 23:51:47.860903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.832 qpair failed and we were unable to recover it. 00:25:12.832 [2024-07-15 23:51:47.861125] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.832 [2024-07-15 23:51:47.861163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.832 qpair failed and we were unable to recover it. 00:25:12.832 [2024-07-15 23:51:47.861328] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.832 [2024-07-15 23:51:47.861366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.832 qpair failed and we were unable to recover it. 00:25:12.832 [2024-07-15 23:51:47.861538] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.832 [2024-07-15 23:51:47.861564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.832 qpair failed and we were unable to recover it. 00:25:12.832 [2024-07-15 23:51:47.861658] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.832 [2024-07-15 23:51:47.861684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.832 qpair failed and we were unable to recover it. 00:25:12.832 [2024-07-15 23:51:47.861799] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.832 [2024-07-15 23:51:47.861825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.832 qpair failed and we were unable to recover it. 
00:25:12.832 [2024-07-15 23:51:47.861944] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.832 [2024-07-15 23:51:47.862007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.832 qpair failed and we were unable to recover it. 00:25:12.832 [2024-07-15 23:51:47.862210] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.832 [2024-07-15 23:51:47.862235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.832 qpair failed and we were unable to recover it. 00:25:12.832 [2024-07-15 23:51:47.862386] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.832 [2024-07-15 23:51:47.862426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.832 qpair failed and we were unable to recover it. 00:25:12.832 [2024-07-15 23:51:47.862567] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.832 [2024-07-15 23:51:47.862606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.832 qpair failed and we were unable to recover it. 00:25:12.832 [2024-07-15 23:51:47.862753] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.832 [2024-07-15 23:51:47.862791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.832 qpair failed and we were unable to recover it. 00:25:12.832 [2024-07-15 23:51:47.862937] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.832 [2024-07-15 23:51:47.862988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.832 qpair failed and we were unable to recover it. 00:25:12.832 [2024-07-15 23:51:47.863158] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.832 [2024-07-15 23:51:47.863197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.832 qpair failed and we were unable to recover it. 00:25:12.832 [2024-07-15 23:51:47.863398] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.832 [2024-07-15 23:51:47.863437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.832 qpair failed and we were unable to recover it. 00:25:12.832 [2024-07-15 23:51:47.863583] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.832 [2024-07-15 23:51:47.863623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.832 qpair failed and we were unable to recover it. 00:25:12.832 [2024-07-15 23:51:47.863775] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.832 [2024-07-15 23:51:47.863813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.832 qpair failed and we were unable to recover it. 
00:25:12.832 [2024-07-15 23:51:47.863970] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.832 [2024-07-15 23:51:47.864010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.832 qpair failed and we were unable to recover it. 00:25:12.832 [2024-07-15 23:51:47.864181] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.832 [2024-07-15 23:51:47.864219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.832 qpair failed and we were unable to recover it. 00:25:12.832 [2024-07-15 23:51:47.864390] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.832 [2024-07-15 23:51:47.864428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.832 qpair failed and we were unable to recover it. 00:25:12.832 [2024-07-15 23:51:47.864600] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.832 [2024-07-15 23:51:47.864639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.832 qpair failed and we were unable to recover it. 00:25:12.832 [2024-07-15 23:51:47.864772] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.833 [2024-07-15 23:51:47.864810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.833 qpair failed and we were unable to recover it. 00:25:12.833 [2024-07-15 23:51:47.865020] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.833 [2024-07-15 23:51:47.865059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.833 qpair failed and we were unable to recover it. 00:25:12.833 [2024-07-15 23:51:47.865226] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.833 [2024-07-15 23:51:47.865265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.833 qpair failed and we were unable to recover it. 00:25:12.833 [2024-07-15 23:51:47.865420] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.833 [2024-07-15 23:51:47.865458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.833 qpair failed and we were unable to recover it. 00:25:12.833 [2024-07-15 23:51:47.865658] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.833 [2024-07-15 23:51:47.865696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.833 qpair failed and we were unable to recover it. 00:25:12.833 [2024-07-15 23:51:47.865896] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.833 [2024-07-15 23:51:47.865934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.833 qpair failed and we were unable to recover it. 
00:25:12.833 [2024-07-15 23:51:47.866094] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.833 [2024-07-15 23:51:47.866132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.833 qpair failed and we were unable to recover it. 00:25:12.833 [2024-07-15 23:51:47.866264] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.833 [2024-07-15 23:51:47.866310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.833 qpair failed and we were unable to recover it. 00:25:12.833 [2024-07-15 23:51:47.866464] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.833 [2024-07-15 23:51:47.866503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.833 qpair failed and we were unable to recover it. 00:25:12.833 [2024-07-15 23:51:47.866651] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.833 [2024-07-15 23:51:47.866689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.833 qpair failed and we were unable to recover it. 00:25:12.833 [2024-07-15 23:51:47.866862] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.833 [2024-07-15 23:51:47.866900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.833 qpair failed and we were unable to recover it. 00:25:12.833 [2024-07-15 23:51:47.867039] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.833 [2024-07-15 23:51:47.867078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.833 qpair failed and we were unable to recover it. 00:25:12.833 [2024-07-15 23:51:47.867277] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.833 [2024-07-15 23:51:47.867315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.833 qpair failed and we were unable to recover it. 00:25:12.833 [2024-07-15 23:51:47.867475] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.833 [2024-07-15 23:51:47.867514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.833 qpair failed and we were unable to recover it. 00:25:12.833 [2024-07-15 23:51:47.867713] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.833 [2024-07-15 23:51:47.867752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.833 qpair failed and we were unable to recover it. 00:25:12.833 [2024-07-15 23:51:47.867921] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.833 [2024-07-15 23:51:47.867969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.833 qpair failed and we were unable to recover it. 
00:25:12.833 [2024-07-15 23:51:47.868122] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.833 [2024-07-15 23:51:47.868148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.833 qpair failed and we were unable to recover it. 00:25:12.833 [2024-07-15 23:51:47.868279] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.833 [2024-07-15 23:51:47.868304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.833 qpair failed and we were unable to recover it. 00:25:12.833 [2024-07-15 23:51:47.868471] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.833 [2024-07-15 23:51:47.868535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.833 qpair failed and we were unable to recover it. 00:25:12.833 [2024-07-15 23:51:47.868780] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.833 [2024-07-15 23:51:47.868819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.833 qpair failed and we were unable to recover it. 00:25:12.833 [2024-07-15 23:51:47.869015] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.833 [2024-07-15 23:51:47.869055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.833 qpair failed and we were unable to recover it. 00:25:12.833 [2024-07-15 23:51:47.869207] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.833 [2024-07-15 23:51:47.869233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.833 qpair failed and we were unable to recover it. 00:25:12.833 [2024-07-15 23:51:47.869377] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.833 [2024-07-15 23:51:47.869403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.833 qpair failed and we were unable to recover it. 00:25:12.833 [2024-07-15 23:51:47.869592] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.833 [2024-07-15 23:51:47.869631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.833 qpair failed and we were unable to recover it. 00:25:12.833 [2024-07-15 23:51:47.869794] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.833 [2024-07-15 23:51:47.869820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.833 qpair failed and we were unable to recover it. 00:25:12.833 [2024-07-15 23:51:47.869943] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.833 [2024-07-15 23:51:47.869976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.833 qpair failed and we were unable to recover it. 
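On Linux, errno = 111 is ECONNREFUSED: the TCP SYN to 10.0.0.2:4420 (port 4420 is the standard NVMe/TCP port) was answered with a reset, meaning nothing was accepting connections on that address and port at the time, or a firewall actively rejected the attempt. The standalone probe below is a minimal sketch, not SPDK code; the address and port are simply taken from the log lines above. It reproduces the same failure so the number 111 can be matched to its symbolic name.

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    /* Standalone probe, not SPDK code. Target address/port are the
     * ones seen in the log above. */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa = {0};
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr); /* target from the log */

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* With no listener on the port this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        return 1;
    }
    printf("connected: a listener is accepting on 10.0.0.2:4420\n");
    close(fd);
    return 0;
}

Run against a host with no listener on 4420, it prints the same "connect() failed, errno = 111" seen throughout this run.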
00:25:12.833 [2024-07-15 23:51:47.870140] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.833 [2024-07-15 23:51:47.870179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.833 qpair failed and we were unable to recover it. 00:25:12.833 [2024-07-15 23:51:47.870363] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.833 [2024-07-15 23:51:47.870389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.833 qpair failed and we were unable to recover it. 00:25:12.833 [2024-07-15 23:51:47.870515] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.833 [2024-07-15 23:51:47.870541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.833 qpair failed and we were unable to recover it. 00:25:12.833 [2024-07-15 23:51:47.870648] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.833 [2024-07-15 23:51:47.870687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.833 qpair failed and we were unable to recover it. 00:25:12.833 [2024-07-15 23:51:47.870866] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.833 [2024-07-15 23:51:47.870905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.833 qpair failed and we were unable to recover it. 00:25:12.833 [2024-07-15 23:51:47.871086] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.833 [2024-07-15 23:51:47.871125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.833 qpair failed and we were unable to recover it. 00:25:12.833 [2024-07-15 23:51:47.871295] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.833 [2024-07-15 23:51:47.871333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.833 qpair failed and we were unable to recover it. 00:25:12.833 [2024-07-15 23:51:47.871538] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.833 [2024-07-15 23:51:47.871577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.833 qpair failed and we were unable to recover it. 00:25:12.833 [2024-07-15 23:51:47.871811] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.833 [2024-07-15 23:51:47.871869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.833 qpair failed and we were unable to recover it. 00:25:12.833 [2024-07-15 23:51:47.872049] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.833 [2024-07-15 23:51:47.872092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.833 qpair failed and we were unable to recover it. 
00:25:12.833 [2024-07-15 23:51:47.872276] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.834 [2024-07-15 23:51:47.872315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.834 qpair failed and we were unable to recover it. 00:25:12.834 [2024-07-15 23:51:47.872483] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.834 [2024-07-15 23:51:47.872522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.834 qpair failed and we were unable to recover it. 00:25:12.834 [2024-07-15 23:51:47.872694] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.834 [2024-07-15 23:51:47.872734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.834 qpair failed and we were unable to recover it. 00:25:12.834 [2024-07-15 23:51:47.872906] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.834 [2024-07-15 23:51:47.872944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.834 qpair failed and we were unable to recover it. 00:25:12.834 [2024-07-15 23:51:47.873129] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.834 [2024-07-15 23:51:47.873155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.834 qpair failed and we were unable to recover it. 00:25:12.834 [2024-07-15 23:51:47.873304] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.834 [2024-07-15 23:51:47.873329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.834 qpair failed and we were unable to recover it. 00:25:12.834 [2024-07-15 23:51:47.873494] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.834 [2024-07-15 23:51:47.873532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.834 qpair failed and we were unable to recover it. 00:25:12.834 [2024-07-15 23:51:47.873711] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.834 [2024-07-15 23:51:47.873749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.834 qpair failed and we were unable to recover it. 00:25:12.834 [2024-07-15 23:51:47.873968] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.834 [2024-07-15 23:51:47.874030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.834 qpair failed and we were unable to recover it. 00:25:12.834 [2024-07-15 23:51:47.874201] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.834 [2024-07-15 23:51:47.874241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.834 qpair failed and we were unable to recover it. 
00:25:12.834 [2024-07-15 23:51:47.874385] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.834 [2024-07-15 23:51:47.874423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.834 qpair failed and we were unable to recover it. 00:25:12.834 [2024-07-15 23:51:47.874569] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.834 [2024-07-15 23:51:47.874618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.834 qpair failed and we were unable to recover it. 00:25:12.834 [2024-07-15 23:51:47.874823] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.834 [2024-07-15 23:51:47.874849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.834 qpair failed and we were unable to recover it. 00:25:12.834 [2024-07-15 23:51:47.874980] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.834 [2024-07-15 23:51:47.875007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.834 qpair failed and we were unable to recover it. 00:25:12.834 [2024-07-15 23:51:47.875100] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.834 [2024-07-15 23:51:47.875126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.834 qpair failed and we were unable to recover it. 00:25:12.834 [2024-07-15 23:51:47.875224] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.834 [2024-07-15 23:51:47.875249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.834 qpair failed and we were unable to recover it. 00:25:12.834 [2024-07-15 23:51:47.875365] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.834 [2024-07-15 23:51:47.875390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.834 qpair failed and we were unable to recover it. 00:25:12.834 [2024-07-15 23:51:47.875508] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.834 [2024-07-15 23:51:47.875533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.834 qpair failed and we were unable to recover it. 00:25:12.834 [2024-07-15 23:51:47.875729] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.834 [2024-07-15 23:51:47.875770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.834 qpair failed and we were unable to recover it. 00:25:12.834 [2024-07-15 23:51:47.875986] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.834 [2024-07-15 23:51:47.876034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.834 qpair failed and we were unable to recover it. 
00:25:12.834 [2024-07-15 23:51:47.876211] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.834 [2024-07-15 23:51:47.876249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.834 qpair failed and we were unable to recover it. 00:25:12.834 [2024-07-15 23:51:47.876421] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.834 [2024-07-15 23:51:47.876450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.834 qpair failed and we were unable to recover it. 00:25:12.834 [2024-07-15 23:51:47.876556] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.834 [2024-07-15 23:51:47.876582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.834 qpair failed and we were unable to recover it. 00:25:12.834 [2024-07-15 23:51:47.876705] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.834 [2024-07-15 23:51:47.876730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.834 qpair failed and we were unable to recover it. 00:25:12.834 [2024-07-15 23:51:47.876852] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.834 [2024-07-15 23:51:47.876904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:12.834 qpair failed and we were unable to recover it. 00:25:12.834 [2024-07-15 23:51:47.877157] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.834 [2024-07-15 23:51:47.877223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.834 qpair failed and we were unable to recover it. 00:25:12.834 [2024-07-15 23:51:47.877426] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.834 [2024-07-15 23:51:47.877470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.834 qpair failed and we were unable to recover it. 00:25:12.834 [2024-07-15 23:51:47.877662] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.834 [2024-07-15 23:51:47.877702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.834 qpair failed and we were unable to recover it. 00:25:12.834 [2024-07-15 23:51:47.877923] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.834 [2024-07-15 23:51:47.877949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.834 qpair failed and we were unable to recover it. 00:25:12.834 [2024-07-15 23:51:47.878066] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.834 [2024-07-15 23:51:47.878093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:12.834 qpair failed and we were unable to recover it. 
00:25:13.116 [2024-07-15 23:51:47.878270] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.117 [2024-07-15 23:51:47.878310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.117 qpair failed and we were unable to recover it. 00:25:13.117 [2024-07-15 23:51:47.878471] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.117 [2024-07-15 23:51:47.878512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.117 qpair failed and we were unable to recover it. 00:25:13.117 [2024-07-15 23:51:47.878686] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.117 [2024-07-15 23:51:47.878726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.117 qpair failed and we were unable to recover it. 00:25:13.117 [2024-07-15 23:51:47.878948] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.117 [2024-07-15 23:51:47.879002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.117 qpair failed and we were unable to recover it. 00:25:13.117 [2024-07-15 23:51:47.879189] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.117 [2024-07-15 23:51:47.879231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.117 qpair failed and we were unable to recover it. 00:25:13.117 [2024-07-15 23:51:47.879404] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.117 [2024-07-15 23:51:47.879445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.117 qpair failed and we were unable to recover it. 00:25:13.117 [2024-07-15 23:51:47.879617] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.117 [2024-07-15 23:51:47.879658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.117 qpair failed and we were unable to recover it. 00:25:13.117 [2024-07-15 23:51:47.879858] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.117 [2024-07-15 23:51:47.879901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.117 qpair failed and we were unable to recover it. 00:25:13.117 [2024-07-15 23:51:47.880072] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.117 [2024-07-15 23:51:47.880115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.117 qpair failed and we were unable to recover it. 00:25:13.117 [2024-07-15 23:51:47.880280] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.117 [2024-07-15 23:51:47.880306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.117 qpair failed and we were unable to recover it. 
00:25:13.117 [2024-07-15 23:51:47.880434] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.117 [2024-07-15 23:51:47.880460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.117 qpair failed and we were unable to recover it. 00:25:13.117 [2024-07-15 23:51:47.880645] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.117 [2024-07-15 23:51:47.880671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.117 qpair failed and we were unable to recover it. 00:25:13.117 [2024-07-15 23:51:47.880772] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.117 [2024-07-15 23:51:47.880797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.117 qpair failed and we were unable to recover it. 00:25:13.117 [2024-07-15 23:51:47.880893] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.117 [2024-07-15 23:51:47.880918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.117 qpair failed and we were unable to recover it. 00:25:13.117 [2024-07-15 23:51:47.881022] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.117 [2024-07-15 23:51:47.881049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.117 qpair failed and we were unable to recover it. 00:25:13.117 [2024-07-15 23:51:47.881148] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.117 [2024-07-15 23:51:47.881174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.117 qpair failed and we were unable to recover it. 00:25:13.117 [2024-07-15 23:51:47.881272] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.117 [2024-07-15 23:51:47.881297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.117 qpair failed and we were unable to recover it. 00:25:13.117 [2024-07-15 23:51:47.881388] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.117 [2024-07-15 23:51:47.881413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.117 qpair failed and we were unable to recover it. 00:25:13.117 [2024-07-15 23:51:47.881581] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.117 [2024-07-15 23:51:47.881621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.117 qpair failed and we were unable to recover it. 00:25:13.117 [2024-07-15 23:51:47.881762] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.117 [2024-07-15 23:51:47.881803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.117 qpair failed and we were unable to recover it. 
00:25:13.117 [2024-07-15 23:51:47.881978] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.117 [2024-07-15 23:51:47.882021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.117 qpair failed and we were unable to recover it. 00:25:13.117 [2024-07-15 23:51:47.882209] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.117 [2024-07-15 23:51:47.882250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.117 qpair failed and we were unable to recover it. 00:25:13.117 [2024-07-15 23:51:47.882405] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.117 [2024-07-15 23:51:47.882446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.117 qpair failed and we were unable to recover it. 00:25:13.117 [2024-07-15 23:51:47.882632] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.117 [2024-07-15 23:51:47.882672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.117 qpair failed and we were unable to recover it. 00:25:13.117 [2024-07-15 23:51:47.882861] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.117 [2024-07-15 23:51:47.882902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.117 qpair failed and we were unable to recover it. 00:25:13.117 [2024-07-15 23:51:47.883064] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.117 [2024-07-15 23:51:47.883106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.117 qpair failed and we were unable to recover it. 00:25:13.117 [2024-07-15 23:51:47.883282] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.117 [2024-07-15 23:51:47.883323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.117 qpair failed and we were unable to recover it. 00:25:13.117 [2024-07-15 23:51:47.883482] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.117 [2024-07-15 23:51:47.883523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.117 qpair failed and we were unable to recover it. 00:25:13.117 [2024-07-15 23:51:47.883674] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.117 [2024-07-15 23:51:47.883714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.117 qpair failed and we were unable to recover it. 00:25:13.117 [2024-07-15 23:51:47.883893] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.117 [2024-07-15 23:51:47.883934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.117 qpair failed and we were unable to recover it. 
00:25:13.117 [2024-07-15 23:51:47.884113] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.117 [2024-07-15 23:51:47.884154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.117 qpair failed and we were unable to recover it.
00:25:13.117 [2024-07-15 23:51:47.884310] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.117 [2024-07-15 23:51:47.884350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.117 qpair failed and we were unable to recover it.
00:25:13.117 [2024-07-15 23:51:47.884492] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.117 [2024-07-15 23:51:47.884533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.117 qpair failed and we were unable to recover it.
00:25:13.117 [2024-07-15 23:51:47.884709] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.117 [2024-07-15 23:51:47.884750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.117 qpair failed and we were unable to recover it.
00:25:13.117 [2024-07-15 23:51:47.884922] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.117 [2024-07-15 23:51:47.884985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.117 qpair failed and we were unable to recover it.
00:25:13.117 [2024-07-15 23:51:47.885152] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.117 [2024-07-15 23:51:47.885205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.117 qpair failed and we were unable to recover it.
00:25:13.117 [2024-07-15 23:51:47.885378] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.117 [2024-07-15 23:51:47.885428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.117 qpair failed and we were unable to recover it.
00:25:13.117 [2024-07-15 23:51:47.885612] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.117 [2024-07-15 23:51:47.885659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.117 qpair failed and we were unable to recover it.
00:25:13.117 [2024-07-15 23:51:47.885882] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.117 [2024-07-15 23:51:47.885930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.117 qpair failed and we were unable to recover it.
00:25:13.117 [2024-07-15 23:51:47.886157] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.118 [2024-07-15 23:51:47.886207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.118 qpair failed and we were unable to recover it.
00:25:13.118 [2024-07-15 23:51:47.886437] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.118 [2024-07-15 23:51:47.886488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.118 qpair failed and we were unable to recover it.
00:25:13.118 [2024-07-15 23:51:47.886768] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.118 [2024-07-15 23:51:47.886816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.118 qpair failed and we were unable to recover it.
00:25:13.118 [2024-07-15 23:51:47.887021] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.118 [2024-07-15 23:51:47.887087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.118 qpair failed and we were unable to recover it.
00:25:13.118 [2024-07-15 23:51:47.887281] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.118 [2024-07-15 23:51:47.887322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.118 qpair failed and we were unable to recover it.
00:25:13.118 [2024-07-15 23:51:47.887525] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.118 [2024-07-15 23:51:47.887598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.118 qpair failed and we were unable to recover it.
00:25:13.118 [2024-07-15 23:51:47.887844] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.118 [2024-07-15 23:51:47.887893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.118 qpair failed and we were unable to recover it.
00:25:13.118 [2024-07-15 23:51:47.888116] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.118 [2024-07-15 23:51:47.888166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.118 qpair failed and we were unable to recover it.
00:25:13.118 [2024-07-15 23:51:47.888362] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.118 [2024-07-15 23:51:47.888411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.118 qpair failed and we were unable to recover it.
00:25:13.118 [2024-07-15 23:51:47.888647] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.118 [2024-07-15 23:51:47.888695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.118 qpair failed and we were unable to recover it.
00:25:13.118 [2024-07-15 23:51:47.888968] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.118 [2024-07-15 23:51:47.889030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.118 qpair failed and we were unable to recover it.
00:25:13.118 [2024-07-15 23:51:47.889210] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.118 [2024-07-15 23:51:47.889251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.118 qpair failed and we were unable to recover it.
00:25:13.118 [2024-07-15 23:51:47.889435] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.118 [2024-07-15 23:51:47.889475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.118 qpair failed and we were unable to recover it.
00:25:13.118 [2024-07-15 23:51:47.889688] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.118 [2024-07-15 23:51:47.889728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.118 qpair failed and we were unable to recover it.
00:25:13.118 [2024-07-15 23:51:47.889903] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.118 [2024-07-15 23:51:47.889944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.118 qpair failed and we were unable to recover it.
00:25:13.118 [2024-07-15 23:51:47.890132] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.118 [2024-07-15 23:51:47.890172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.118 qpair failed and we were unable to recover it.
00:25:13.118 [2024-07-15 23:51:47.890324] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.118 [2024-07-15 23:51:47.890364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.118 qpair failed and we were unable to recover it.
00:25:13.118 [2024-07-15 23:51:47.890541] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.118 [2024-07-15 23:51:47.890582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.118 qpair failed and we were unable to recover it.
00:25:13.118 [2024-07-15 23:51:47.890765] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.118 [2024-07-15 23:51:47.890805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.118 qpair failed and we were unable to recover it.
00:25:13.118 [2024-07-15 23:51:47.890965] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.118 [2024-07-15 23:51:47.891010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.118 qpair failed and we were unable to recover it.
00:25:13.118 [2024-07-15 23:51:47.891106] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.118 [2024-07-15 23:51:47.891132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.118 qpair failed and we were unable to recover it.
00:25:13.118 [2024-07-15 23:51:47.891233] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.118 [2024-07-15 23:51:47.891258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.118 qpair failed and we were unable to recover it.
00:25:13.118 [2024-07-15 23:51:47.891380] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.118 [2024-07-15 23:51:47.891405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.118 qpair failed and we were unable to recover it.
00:25:13.118 [2024-07-15 23:51:47.891521] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.118 [2024-07-15 23:51:47.891561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.118 qpair failed and we were unable to recover it.
00:25:13.118 [2024-07-15 23:51:47.891675] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.118 [2024-07-15 23:51:47.891702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.118 qpair failed and we were unable to recover it.
00:25:13.118 [2024-07-15 23:51:47.891803] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.118 [2024-07-15 23:51:47.891829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.118 qpair failed and we were unable to recover it.
00:25:13.118 [2024-07-15 23:51:47.891930] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.118 [2024-07-15 23:51:47.891969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.118 qpair failed and we were unable to recover it.
00:25:13.118 [2024-07-15 23:51:47.892078] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.118 [2024-07-15 23:51:47.892104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.118 qpair failed and we were unable to recover it.
00:25:13.118 [2024-07-15 23:51:47.892204] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.118 [2024-07-15 23:51:47.892230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.118 qpair failed and we were unable to recover it.
00:25:13.118 [2024-07-15 23:51:47.892331] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.118 [2024-07-15 23:51:47.892357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.118 qpair failed and we were unable to recover it.
00:25:13.118 [2024-07-15 23:51:47.892454] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.118 [2024-07-15 23:51:47.892480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.118 qpair failed and we were unable to recover it.
00:25:13.118 [2024-07-15 23:51:47.892577] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.118 [2024-07-15 23:51:47.892603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.118 qpair failed and we were unable to recover it.
00:25:13.118 [2024-07-15 23:51:47.892729] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.118 [2024-07-15 23:51:47.892757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.118 qpair failed and we were unable to recover it.
00:25:13.118 [2024-07-15 23:51:47.892889] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.118 [2024-07-15 23:51:47.892931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.118 qpair failed and we were unable to recover it.
00:25:13.118 [2024-07-15 23:51:47.893072] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.118 [2024-07-15 23:51:47.893098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.118 qpair failed and we were unable to recover it.
00:25:13.118 [2024-07-15 23:51:47.893196] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.118 [2024-07-15 23:51:47.893221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.118 qpair failed and we were unable to recover it.
00:25:13.118 [2024-07-15 23:51:47.893386] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.118 [2024-07-15 23:51:47.893436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.118 qpair failed and we were unable to recover it.
00:25:13.118 [2024-07-15 23:51:47.893616] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.118 [2024-07-15 23:51:47.893659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.118 qpair failed and we were unable to recover it.
00:25:13.118 [2024-07-15 23:51:47.893849] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.118 [2024-07-15 23:51:47.893892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.118 qpair failed and we were unable to recover it.
00:25:13.118 [2024-07-15 23:51:47.894064] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.118 [2024-07-15 23:51:47.894108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.118 qpair failed and we were unable to recover it.
00:25:13.118 [2024-07-15 23:51:47.894323] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.118 [2024-07-15 23:51:47.894366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.118 qpair failed and we were unable to recover it.
00:25:13.119 [2024-07-15 23:51:47.894489] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.119 [2024-07-15 23:51:47.894516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.119 qpair failed and we were unable to recover it.
00:25:13.119 [2024-07-15 23:51:47.894616] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.119 [2024-07-15 23:51:47.894642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.119 qpair failed and we were unable to recover it.
00:25:13.119 [2024-07-15 23:51:47.894746] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.119 [2024-07-15 23:51:47.894773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.119 qpair failed and we were unable to recover it.
00:25:13.119 [2024-07-15 23:51:47.894870] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.119 [2024-07-15 23:51:47.894897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.119 qpair failed and we were unable to recover it.
00:25:13.119 [2024-07-15 23:51:47.895003] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.119 [2024-07-15 23:51:47.895030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.119 qpair failed and we were unable to recover it.
00:25:13.119 [2024-07-15 23:51:47.895124] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.119 [2024-07-15 23:51:47.895151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.119 qpair failed and we were unable to recover it.
00:25:13.119 [2024-07-15 23:51:47.895253] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.119 [2024-07-15 23:51:47.895280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.119 qpair failed and we were unable to recover it.
00:25:13.119 [2024-07-15 23:51:47.895401] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.119 [2024-07-15 23:51:47.895441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.119 qpair failed and we were unable to recover it.
00:25:13.119 [2024-07-15 23:51:47.895653] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.119 [2024-07-15 23:51:47.895693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.119 qpair failed and we were unable to recover it.
00:25:13.119 [2024-07-15 23:51:47.895887] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.119 [2024-07-15 23:51:47.895928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.119 qpair failed and we were unable to recover it.
00:25:13.119 [2024-07-15 23:51:47.896081] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.119 [2024-07-15 23:51:47.896123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.119 qpair failed and we were unable to recover it.
00:25:13.119 [2024-07-15 23:51:47.896284] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.119 [2024-07-15 23:51:47.896325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.119 qpair failed and we were unable to recover it.
00:25:13.119 [2024-07-15 23:51:47.896468] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.119 [2024-07-15 23:51:47.896509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.119 qpair failed and we were unable to recover it.
00:25:13.119 [2024-07-15 23:51:47.896672] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.119 [2024-07-15 23:51:47.896707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.119 qpair failed and we were unable to recover it.
00:25:13.119 [2024-07-15 23:51:47.896828] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.119 [2024-07-15 23:51:47.896862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.119 qpair failed and we were unable to recover it.
00:25:13.119 [2024-07-15 23:51:47.897018] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.119 [2024-07-15 23:51:47.897053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.119 qpair failed and we were unable to recover it.
00:25:13.119 [2024-07-15 23:51:47.897172] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.119 [2024-07-15 23:51:47.897205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.119 qpair failed and we were unable to recover it.
00:25:13.119 [2024-07-15 23:51:47.897342] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.119 [2024-07-15 23:51:47.897376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.119 qpair failed and we were unable to recover it.
00:25:13.119 [2024-07-15 23:51:47.897559] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.119 [2024-07-15 23:51:47.897592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.119 qpair failed and we were unable to recover it.
00:25:13.119 [2024-07-15 23:51:47.897716] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.119 [2024-07-15 23:51:47.897749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.119 qpair failed and we were unable to recover it.
00:25:13.119 [2024-07-15 23:51:47.897864] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.119 [2024-07-15 23:51:47.897897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.119 qpair failed and we were unable to recover it.
00:25:13.119 [2024-07-15 23:51:47.898071] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.119 [2024-07-15 23:51:47.898117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.119 qpair failed and we were unable to recover it.
00:25:13.119 [2024-07-15 23:51:47.898285] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.119 [2024-07-15 23:51:47.898327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.119 qpair failed and we were unable to recover it.
00:25:13.119 [2024-07-15 23:51:47.898482] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.119 [2024-07-15 23:51:47.898522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.119 qpair failed and we were unable to recover it.
00:25:13.119 [2024-07-15 23:51:47.898698] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.119 [2024-07-15 23:51:47.898738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.119 qpair failed and we were unable to recover it.
00:25:13.119 [2024-07-15 23:51:47.898928] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.119 [2024-07-15 23:51:47.899003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.119 qpair failed and we were unable to recover it.
00:25:13.119 [2024-07-15 23:51:47.899161] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.119 [2024-07-15 23:51:47.899202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.119 qpair failed and we were unable to recover it.
00:25:13.119 [2024-07-15 23:51:47.899391] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.119 [2024-07-15 23:51:47.899432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.119 qpair failed and we were unable to recover it.
00:25:13.119 [2024-07-15 23:51:47.899572] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.119 [2024-07-15 23:51:47.899614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.119 qpair failed and we were unable to recover it.
00:25:13.119 [2024-07-15 23:51:47.899768] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.119 [2024-07-15 23:51:47.899808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.119 qpair failed and we were unable to recover it.
00:25:13.119 [2024-07-15 23:51:47.899996] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.119 [2024-07-15 23:51:47.900037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.119 qpair failed and we were unable to recover it.
00:25:13.119 [2024-07-15 23:51:47.900189] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.119 [2024-07-15 23:51:47.900231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.119 qpair failed and we were unable to recover it.
00:25:13.119 [2024-07-15 23:51:47.900406] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.119 [2024-07-15 23:51:47.900447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.119 qpair failed and we were unable to recover it.
00:25:13.119 [2024-07-15 23:51:47.900599] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.119 [2024-07-15 23:51:47.900640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.119 qpair failed and we were unable to recover it.
00:25:13.119 [2024-07-15 23:51:47.900794] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.119 [2024-07-15 23:51:47.900834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.119 qpair failed and we were unable to recover it.
00:25:13.119 [2024-07-15 23:51:47.901010] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.119 [2024-07-15 23:51:47.901058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.119 qpair failed and we were unable to recover it.
00:25:13.119 [2024-07-15 23:51:47.901198] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.119 [2024-07-15 23:51:47.901240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.119 qpair failed and we were unable to recover it.
00:25:13.119 [2024-07-15 23:51:47.901417] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.119 [2024-07-15 23:51:47.901457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.119 qpair failed and we were unable to recover it.
00:25:13.119 [2024-07-15 23:51:47.901609] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.119 [2024-07-15 23:51:47.901650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.119 qpair failed and we were unable to recover it.
00:25:13.119 [2024-07-15 23:51:47.901844] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.119 [2024-07-15 23:51:47.901885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.119 qpair failed and we were unable to recover it.
00:25:13.119 [2024-07-15 23:51:47.902031] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.120 [2024-07-15 23:51:47.902072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.120 qpair failed and we were unable to recover it.
00:25:13.120 [2024-07-15 23:51:47.902220] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.120 [2024-07-15 23:51:47.902260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.120 qpair failed and we were unable to recover it.
00:25:13.120 [2024-07-15 23:51:47.902407] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.120 [2024-07-15 23:51:47.902449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.120 qpair failed and we were unable to recover it.
00:25:13.120 [2024-07-15 23:51:47.902654] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.120 [2024-07-15 23:51:47.902695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.120 qpair failed and we were unable to recover it.
00:25:13.120 [2024-07-15 23:51:47.902870] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.120 [2024-07-15 23:51:47.902910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.120 qpair failed and we were unable to recover it.
00:25:13.120 [2024-07-15 23:51:47.903126] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.120 [2024-07-15 23:51:47.903168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.120 qpair failed and we were unable to recover it.
00:25:13.120 [2024-07-15 23:51:47.903322] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.120 [2024-07-15 23:51:47.903363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.120 qpair failed and we were unable to recover it.
00:25:13.120 [2024-07-15 23:51:47.903545] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.120 [2024-07-15 23:51:47.903586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.120 qpair failed and we were unable to recover it.
00:25:13.120 [2024-07-15 23:51:47.903790] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.120 [2024-07-15 23:51:47.903836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.120 qpair failed and we were unable to recover it.
00:25:13.120 [2024-07-15 23:51:47.904040] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.120 [2024-07-15 23:51:47.904081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.120 qpair failed and we were unable to recover it.
00:25:13.120 [2024-07-15 23:51:47.904222] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.120 [2024-07-15 23:51:47.904263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.120 qpair failed and we were unable to recover it.
00:25:13.120 [2024-07-15 23:51:47.904471] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.120 [2024-07-15 23:51:47.904512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.120 qpair failed and we were unable to recover it.
00:25:13.120 [2024-07-15 23:51:47.904690] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.120 [2024-07-15 23:51:47.904730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.120 qpair failed and we were unable to recover it.
00:25:13.120 [2024-07-15 23:51:47.904876] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.120 [2024-07-15 23:51:47.904917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.120 qpair failed and we were unable to recover it.
00:25:13.120 [2024-07-15 23:51:47.905125] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.120 [2024-07-15 23:51:47.905187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.120 qpair failed and we were unable to recover it.
00:25:13.120 [2024-07-15 23:51:47.905386] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.120 [2024-07-15 23:51:47.905430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.120 qpair failed and we were unable to recover it.
00:25:13.120 [2024-07-15 23:51:47.905607] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.120 [2024-07-15 23:51:47.905648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.120 qpair failed and we were unable to recover it.
00:25:13.120 [2024-07-15 23:51:47.905826] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.120 [2024-07-15 23:51:47.905867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.120 qpair failed and we were unable to recover it.
00:25:13.120 [2024-07-15 23:51:47.906045] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.120 [2024-07-15 23:51:47.906087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.120 qpair failed and we were unable to recover it.
00:25:13.120 [2024-07-15 23:51:47.906233] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.120 [2024-07-15 23:51:47.906274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.120 qpair failed and we were unable to recover it.
00:25:13.120 [2024-07-15 23:51:47.906424] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.120 [2024-07-15 23:51:47.906466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.120 qpair failed and we were unable to recover it.
00:25:13.120 [2024-07-15 23:51:47.906608] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.120 [2024-07-15 23:51:47.906651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.120 qpair failed and we were unable to recover it.
00:25:13.120 [2024-07-15 23:51:47.906826] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.120 [2024-07-15 23:51:47.906876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.120 qpair failed and we were unable to recover it.
00:25:13.120 [2024-07-15 23:51:47.907051] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.120 [2024-07-15 23:51:47.907094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.120 qpair failed and we were unable to recover it.
00:25:13.120 [2024-07-15 23:51:47.907244] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.120 [2024-07-15 23:51:47.907285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.120 qpair failed and we were unable to recover it.
00:25:13.120 [2024-07-15 23:51:47.907487] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.120 [2024-07-15 23:51:47.907528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.120 qpair failed and we were unable to recover it.
00:25:13.120 [2024-07-15 23:51:47.907701] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.120 [2024-07-15 23:51:47.907742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.120 qpair failed and we were unable to recover it.
00:25:13.120 [2024-07-15 23:51:47.907892] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.120 [2024-07-15 23:51:47.907934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.120 qpair failed and we were unable to recover it.
00:25:13.120 [2024-07-15 23:51:47.908124] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.120 [2024-07-15 23:51:47.908165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.120 qpair failed and we were unable to recover it.
00:25:13.120 [2024-07-15 23:51:47.908356] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.120 [2024-07-15 23:51:47.908397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.120 qpair failed and we were unable to recover it.
00:25:13.120 [2024-07-15 23:51:47.908576] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.120 [2024-07-15 23:51:47.908617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.120 qpair failed and we were unable to recover it.
00:25:13.120 [2024-07-15 23:51:47.908790] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.120 [2024-07-15 23:51:47.908841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.120 qpair failed and we were unable to recover it.
00:25:13.120 [2024-07-15 23:51:47.909026] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.120 [2024-07-15 23:51:47.909068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.120 qpair failed and we were unable to recover it.
00:25:13.120 [2024-07-15 23:51:47.909257] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.120 [2024-07-15 23:51:47.909298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.120 qpair failed and we were unable to recover it.
00:25:13.120 [2024-07-15 23:51:47.909477] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.120 [2024-07-15 23:51:47.909518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.120 qpair failed and we were unable to recover it.
00:25:13.120 [2024-07-15 23:51:47.909657] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.120 [2024-07-15 23:51:47.909708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.120 qpair failed and we were unable to recover it.
00:25:13.120 [2024-07-15 23:51:47.909942] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.120 [2024-07-15 23:51:47.910018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.120 qpair failed and we were unable to recover it.
00:25:13.120 [2024-07-15 23:51:47.910198] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.120 [2024-07-15 23:51:47.910242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.120 qpair failed and we were unable to recover it.
00:25:13.120 [2024-07-15 23:51:47.910397] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.120 [2024-07-15 23:51:47.910441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.120 qpair failed and we were unable to recover it.
00:25:13.120 [2024-07-15 23:51:47.910594] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.120 [2024-07-15 23:51:47.910640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.120 qpair failed and we were unable to recover it.
00:25:13.120 [2024-07-15 23:51:47.910802] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.121 [2024-07-15 23:51:47.910845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.121 qpair failed and we were unable to recover it.
00:25:13.121 [2024-07-15 23:51:47.911007] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.121 [2024-07-15 23:51:47.911052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.121 qpair failed and we were unable to recover it.
00:25:13.121 [2024-07-15 23:51:47.911203] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.121 [2024-07-15 23:51:47.911246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.121 qpair failed and we were unable to recover it.
00:25:13.121 [2024-07-15 23:51:47.911459] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.121 [2024-07-15 23:51:47.911503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.121 qpair failed and we were unable to recover it.
00:25:13.121 [2024-07-15 23:51:47.911667] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.121 [2024-07-15 23:51:47.911712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.121 qpair failed and we were unable to recover it.
00:25:13.121 [2024-07-15 23:51:47.911876] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.121 [2024-07-15 23:51:47.911919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.121 qpair failed and we were unable to recover it.
00:25:13.121 [2024-07-15 23:51:47.912104] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.121 [2024-07-15 23:51:47.912149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.121 qpair failed and we were unable to recover it.
00:25:13.121 [2024-07-15 23:51:47.912303] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.121 [2024-07-15 23:51:47.912349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.121 qpair failed and we were unable to recover it.
00:25:13.121 [2024-07-15 23:51:47.912525] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.121 [2024-07-15 23:51:47.912568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.121 qpair failed and we were unable to recover it.
00:25:13.121 [2024-07-15 23:51:47.912736] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.121 [2024-07-15 23:51:47.912781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.121 qpair failed and we were unable to recover it.
00:25:13.121 [2024-07-15 23:51:47.913000] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.121 [2024-07-15 23:51:47.913046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.121 qpair failed and we were unable to recover it.
00:25:13.121 [2024-07-15 23:51:47.913204] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.121 [2024-07-15 23:51:47.913248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.121 qpair failed and we were unable to recover it.
00:25:13.121 [2024-07-15 23:51:47.913434] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.121 [2024-07-15 23:51:47.913477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.121 qpair failed and we were unable to recover it.
00:25:13.121 [2024-07-15 23:51:47.913698] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.121 [2024-07-15 23:51:47.913741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.121 qpair failed and we were unable to recover it.
00:25:13.121 [2024-07-15 23:51:47.913907] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.121 [2024-07-15 23:51:47.913950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.121 qpair failed and we were unable to recover it.
00:25:13.121 [2024-07-15 23:51:47.914113] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.121 [2024-07-15 23:51:47.914156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.121 qpair failed and we were unable to recover it.
00:25:13.121 [2024-07-15 23:51:47.914341] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.121 [2024-07-15 23:51:47.914383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.121 qpair failed and we were unable to recover it.
00:25:13.121 [2024-07-15 23:51:47.914602] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.121 [2024-07-15 23:51:47.914646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.121 qpair failed and we were unable to recover it.
00:25:13.121 [2024-07-15 23:51:47.914872] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.121 [2024-07-15 23:51:47.914916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.121 qpair failed and we were unable to recover it.
00:25:13.121 [2024-07-15 23:51:47.915099] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.121 [2024-07-15 23:51:47.915143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.121 qpair failed and we were unable to recover it.
00:25:13.121 [2024-07-15 23:51:47.915312] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.121 [2024-07-15 23:51:47.915355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.121 qpair failed and we were unable to recover it.
00:25:13.121 [2024-07-15 23:51:47.915569] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.121 [2024-07-15 23:51:47.915618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.121 qpair failed and we were unable to recover it.
00:25:13.121 [2024-07-15 23:51:47.915876] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.121 [2024-07-15 23:51:47.915943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.121 qpair failed and we were unable to recover it.
00:25:13.121 [2024-07-15 23:51:47.916186] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.121 [2024-07-15 23:51:47.916230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.121 qpair failed and we were unable to recover it.
00:25:13.121 [2024-07-15 23:51:47.916416] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.121 [2024-07-15 23:51:47.916461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.121 qpair failed and we were unable to recover it.
00:25:13.121 [2024-07-15 23:51:47.916673] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.121 [2024-07-15 23:51:47.916717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.121 qpair failed and we were unable to recover it.
00:25:13.121 [2024-07-15 23:51:47.916969] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.121 [2024-07-15 23:51:47.917014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.121 qpair failed and we were unable to recover it.
00:25:13.121 [2024-07-15 23:51:47.917174] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.121 [2024-07-15 23:51:47.917218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.121 qpair failed and we were unable to recover it.
00:25:13.121 [2024-07-15 23:51:47.917410] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.121 [2024-07-15 23:51:47.917454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.121 qpair failed and we were unable to recover it.
00:25:13.121 [2024-07-15 23:51:47.917650] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.121 [2024-07-15 23:51:47.917693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.121 qpair failed and we were unable to recover it.
00:25:13.121 [2024-07-15 23:51:47.917842] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.121 [2024-07-15 23:51:47.917884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.121 qpair failed and we were unable to recover it.
00:25:13.121 [2024-07-15 23:51:47.918088] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.121 [2024-07-15 23:51:47.918133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.121 qpair failed and we were unable to recover it.
00:25:13.121 [2024-07-15 23:51:47.918325] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.121 [2024-07-15 23:51:47.918369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.121 qpair failed and we were unable to recover it.
00:25:13.121 [2024-07-15 23:51:47.918555] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.122 [2024-07-15 23:51:47.918598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.122 qpair failed and we were unable to recover it.
00:25:13.122 [2024-07-15 23:51:47.918761] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.122 [2024-07-15 23:51:47.918804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.122 qpair failed and we were unable to recover it.
00:25:13.122 [2024-07-15 23:51:47.919024] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.122 [2024-07-15 23:51:47.919075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.122 qpair failed and we were unable to recover it.
00:25:13.122 [2024-07-15 23:51:47.919297] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.122 [2024-07-15 23:51:47.919340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.122 qpair failed and we were unable to recover it.
00:25:13.122 [2024-07-15 23:51:47.919526] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.122 [2024-07-15 23:51:47.919570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.122 qpair failed and we were unable to recover it.
00:25:13.122 [2024-07-15 23:51:47.919764] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.122 [2024-07-15 23:51:47.919806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.122 qpair failed and we were unable to recover it.
00:25:13.122 [2024-07-15 23:51:47.919972] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.122 [2024-07-15 23:51:47.920019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.122 qpair failed and we were unable to recover it.
00:25:13.122 [2024-07-15 23:51:47.920214] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.122 [2024-07-15 23:51:47.920259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.122 qpair failed and we were unable to recover it. 00:25:13.122 [2024-07-15 23:51:47.920478] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.122 [2024-07-15 23:51:47.920521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.122 qpair failed and we were unable to recover it. 00:25:13.122 [2024-07-15 23:51:47.920712] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.122 [2024-07-15 23:51:47.920755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.122 qpair failed and we were unable to recover it. 00:25:13.122 [2024-07-15 23:51:47.920907] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.122 [2024-07-15 23:51:47.920951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.122 qpair failed and we were unable to recover it. 00:25:13.122 [2024-07-15 23:51:47.921150] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.122 [2024-07-15 23:51:47.921195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.122 qpair failed and we were unable to recover it. 00:25:13.122 [2024-07-15 23:51:47.921386] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.122 [2024-07-15 23:51:47.921430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.122 qpair failed and we were unable to recover it. 00:25:13.122 [2024-07-15 23:51:47.921604] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.122 [2024-07-15 23:51:47.921647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.122 qpair failed and we were unable to recover it. 00:25:13.122 [2024-07-15 23:51:47.921798] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.122 [2024-07-15 23:51:47.921886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.122 qpair failed and we were unable to recover it. 00:25:13.122 [2024-07-15 23:51:47.922147] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.122 [2024-07-15 23:51:47.922192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.122 qpair failed and we were unable to recover it. 00:25:13.122 [2024-07-15 23:51:47.922389] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.122 [2024-07-15 23:51:47.922433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.122 qpair failed and we were unable to recover it. 
00:25:13.122 [2024-07-15 23:51:47.922598] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.122 [2024-07-15 23:51:47.922641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.122 qpair failed and we were unable to recover it. 00:25:13.122 [2024-07-15 23:51:47.922799] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.122 [2024-07-15 23:51:47.922843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.122 qpair failed and we were unable to recover it. 00:25:13.122 [2024-07-15 23:51:47.923038] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.122 [2024-07-15 23:51:47.923084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.122 qpair failed and we were unable to recover it. 00:25:13.122 [2024-07-15 23:51:47.923250] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.122 [2024-07-15 23:51:47.923294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.122 qpair failed and we were unable to recover it. 00:25:13.122 [2024-07-15 23:51:47.923454] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.122 [2024-07-15 23:51:47.923498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.122 qpair failed and we were unable to recover it. 00:25:13.122 [2024-07-15 23:51:47.923653] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.122 [2024-07-15 23:51:47.923696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.122 qpair failed and we were unable to recover it. 00:25:13.122 [2024-07-15 23:51:47.923884] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.122 [2024-07-15 23:51:47.923927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.122 qpair failed and we were unable to recover it. 00:25:13.122 [2024-07-15 23:51:47.924106] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.122 [2024-07-15 23:51:47.924149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.122 qpair failed and we were unable to recover it. 00:25:13.122 [2024-07-15 23:51:47.924373] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.122 [2024-07-15 23:51:47.924417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.122 qpair failed and we were unable to recover it. 00:25:13.122 [2024-07-15 23:51:47.924612] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.122 [2024-07-15 23:51:47.924656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.122 qpair failed and we were unable to recover it. 
00:25:13.122 [2024-07-15 23:51:47.924850] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.122 [2024-07-15 23:51:47.924899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.122 qpair failed and we were unable to recover it. 00:25:13.122 [2024-07-15 23:51:47.925076] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.122 [2024-07-15 23:51:47.925120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.122 qpair failed and we were unable to recover it. 00:25:13.122 [2024-07-15 23:51:47.925307] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.122 [2024-07-15 23:51:47.925357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.122 qpair failed and we were unable to recover it. 00:25:13.122 [2024-07-15 23:51:47.925547] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.122 [2024-07-15 23:51:47.925590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.122 qpair failed and we were unable to recover it. 00:25:13.122 [2024-07-15 23:51:47.925751] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.122 [2024-07-15 23:51:47.925795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.122 qpair failed and we were unable to recover it. 00:25:13.122 [2024-07-15 23:51:47.925964] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.122 [2024-07-15 23:51:47.926010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.122 qpair failed and we were unable to recover it. 00:25:13.122 [2024-07-15 23:51:47.926215] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.122 [2024-07-15 23:51:47.926261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.122 qpair failed and we were unable to recover it. 00:25:13.122 [2024-07-15 23:51:47.926457] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.122 [2024-07-15 23:51:47.926503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.122 qpair failed and we were unable to recover it. 00:25:13.122 [2024-07-15 23:51:47.926725] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.122 [2024-07-15 23:51:47.926771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.122 qpair failed and we were unable to recover it. 00:25:13.122 [2024-07-15 23:51:47.926941] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.122 [2024-07-15 23:51:47.927004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.122 qpair failed and we were unable to recover it. 
00:25:13.122 [2024-07-15 23:51:47.927232] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.122 [2024-07-15 23:51:47.927278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.122 qpair failed and we were unable to recover it. 00:25:13.122 [2024-07-15 23:51:47.927504] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.122 [2024-07-15 23:51:47.927551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.122 qpair failed and we were unable to recover it. 00:25:13.122 [2024-07-15 23:51:47.927746] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.122 [2024-07-15 23:51:47.927792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.122 qpair failed and we were unable to recover it. 00:25:13.122 [2024-07-15 23:51:47.927987] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.122 [2024-07-15 23:51:47.928035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.122 qpair failed and we were unable to recover it. 00:25:13.122 [2024-07-15 23:51:47.928216] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.123 [2024-07-15 23:51:47.928262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.123 qpair failed and we were unable to recover it. 00:25:13.123 [2024-07-15 23:51:47.928428] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.123 [2024-07-15 23:51:47.928474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.123 qpair failed and we were unable to recover it. 00:25:13.123 [2024-07-15 23:51:47.928645] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.123 [2024-07-15 23:51:47.928691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.123 qpair failed and we were unable to recover it. 00:25:13.123 [2024-07-15 23:51:47.928866] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.123 [2024-07-15 23:51:47.928913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.123 qpair failed and we were unable to recover it. 00:25:13.123 [2024-07-15 23:51:47.929103] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.123 [2024-07-15 23:51:47.929150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.123 qpair failed and we were unable to recover it. 00:25:13.123 [2024-07-15 23:51:47.929343] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.123 [2024-07-15 23:51:47.929388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.123 qpair failed and we were unable to recover it. 
00:25:13.123 [2024-07-15 23:51:47.929586] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.123 [2024-07-15 23:51:47.929632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.123 qpair failed and we were unable to recover it. 00:25:13.123 [2024-07-15 23:51:47.929793] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.123 [2024-07-15 23:51:47.929840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.123 qpair failed and we were unable to recover it. 00:25:13.123 [2024-07-15 23:51:47.930032] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.123 [2024-07-15 23:51:47.930080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.123 qpair failed and we were unable to recover it. 00:25:13.123 [2024-07-15 23:51:47.930312] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.123 [2024-07-15 23:51:47.930359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.123 qpair failed and we were unable to recover it. 00:25:13.123 [2024-07-15 23:51:47.930531] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.123 [2024-07-15 23:51:47.930579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.123 qpair failed and we were unable to recover it. 00:25:13.123 [2024-07-15 23:51:47.930750] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.123 [2024-07-15 23:51:47.930796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.123 qpair failed and we were unable to recover it. 00:25:13.123 [2024-07-15 23:51:47.930976] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.123 [2024-07-15 23:51:47.931023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.123 qpair failed and we were unable to recover it. 00:25:13.123 [2024-07-15 23:51:47.931260] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.123 [2024-07-15 23:51:47.931306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.123 qpair failed and we were unable to recover it. 00:25:13.123 [2024-07-15 23:51:47.931503] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.123 [2024-07-15 23:51:47.931551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.123 qpair failed and we were unable to recover it. 00:25:13.123 [2024-07-15 23:51:47.931723] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.123 [2024-07-15 23:51:47.931771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.123 qpair failed and we were unable to recover it. 
00:25:13.123 [2024-07-15 23:51:47.931979] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.123 [2024-07-15 23:51:47.932026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.123 qpair failed and we were unable to recover it. 00:25:13.123 [2024-07-15 23:51:47.932259] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.123 [2024-07-15 23:51:47.932305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.123 qpair failed and we were unable to recover it. 00:25:13.123 [2024-07-15 23:51:47.932531] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.123 [2024-07-15 23:51:47.932577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.123 qpair failed and we were unable to recover it. 00:25:13.123 [2024-07-15 23:51:47.932804] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.123 [2024-07-15 23:51:47.932849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.123 qpair failed and we were unable to recover it. 00:25:13.123 [2024-07-15 23:51:47.933035] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.123 [2024-07-15 23:51:47.933082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.123 qpair failed and we were unable to recover it. 00:25:13.123 [2024-07-15 23:51:47.933259] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.123 [2024-07-15 23:51:47.933304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.123 qpair failed and we were unable to recover it. 00:25:13.123 [2024-07-15 23:51:47.933468] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.123 [2024-07-15 23:51:47.933515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.123 qpair failed and we were unable to recover it. 00:25:13.123 [2024-07-15 23:51:47.933733] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.123 [2024-07-15 23:51:47.933780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.123 qpair failed and we were unable to recover it. 00:25:13.123 [2024-07-15 23:51:47.933948] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.123 [2024-07-15 23:51:47.934007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.123 qpair failed and we were unable to recover it. 00:25:13.123 [2024-07-15 23:51:47.934234] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.123 [2024-07-15 23:51:47.934279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.123 qpair failed and we were unable to recover it. 
00:25:13.123 [2024-07-15 23:51:47.934482] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.123 [2024-07-15 23:51:47.934528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.123 qpair failed and we were unable to recover it. 00:25:13.123 [2024-07-15 23:51:47.934697] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.123 [2024-07-15 23:51:47.934745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.123 qpair failed and we were unable to recover it. 00:25:13.123 [2024-07-15 23:51:47.934946] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.123 [2024-07-15 23:51:47.935009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.123 qpair failed and we were unable to recover it. 00:25:13.123 [2024-07-15 23:51:47.935193] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.123 [2024-07-15 23:51:47.935239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.123 qpair failed and we were unable to recover it. 00:25:13.123 [2024-07-15 23:51:47.935470] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.123 [2024-07-15 23:51:47.935516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.123 qpair failed and we were unable to recover it. 00:25:13.123 [2024-07-15 23:51:47.935696] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.123 [2024-07-15 23:51:47.935748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.123 qpair failed and we were unable to recover it. 00:25:13.123 [2024-07-15 23:51:47.935973] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.123 [2024-07-15 23:51:47.936023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.123 qpair failed and we were unable to recover it. 00:25:13.123 [2024-07-15 23:51:47.936248] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.123 [2024-07-15 23:51:47.936297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.123 qpair failed and we were unable to recover it. 00:25:13.123 [2024-07-15 23:51:47.936480] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.123 [2024-07-15 23:51:47.936529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.123 qpair failed and we were unable to recover it. 00:25:13.123 [2024-07-15 23:51:47.936712] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.123 [2024-07-15 23:51:47.936761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.123 qpair failed and we were unable to recover it. 
00:25:13.123 [2024-07-15 23:51:47.936994] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.123 [2024-07-15 23:51:47.937041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.123 qpair failed and we were unable to recover it. 00:25:13.123 [2024-07-15 23:51:47.937207] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.123 [2024-07-15 23:51:47.937253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.123 qpair failed and we were unable to recover it. 00:25:13.123 [2024-07-15 23:51:47.937476] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.123 [2024-07-15 23:51:47.937522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.123 qpair failed and we were unable to recover it. 00:25:13.123 [2024-07-15 23:51:47.937698] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.123 [2024-07-15 23:51:47.937743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.123 qpair failed and we were unable to recover it. 00:25:13.123 [2024-07-15 23:51:47.937984] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.123 [2024-07-15 23:51:47.938034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.123 qpair failed and we were unable to recover it. 00:25:13.124 [2024-07-15 23:51:47.938201] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.124 [2024-07-15 23:51:47.938252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.124 qpair failed and we were unable to recover it. 00:25:13.124 [2024-07-15 23:51:47.938444] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.124 [2024-07-15 23:51:47.938495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.124 qpair failed and we were unable to recover it. 00:25:13.124 [2024-07-15 23:51:47.938708] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.124 [2024-07-15 23:51:47.938758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.124 qpair failed and we were unable to recover it. 00:25:13.124 [2024-07-15 23:51:47.938971] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.124 [2024-07-15 23:51:47.939021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.124 qpair failed and we were unable to recover it. 00:25:13.124 [2024-07-15 23:51:47.939196] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.124 [2024-07-15 23:51:47.939246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.124 qpair failed and we were unable to recover it. 
00:25:13.124 [2024-07-15 23:51:47.939463] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.124 [2024-07-15 23:51:47.939514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.124 qpair failed and we were unable to recover it. 00:25:13.124 [2024-07-15 23:51:47.939726] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.124 [2024-07-15 23:51:47.939775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.124 qpair failed and we were unable to recover it. 00:25:13.124 [2024-07-15 23:51:47.939966] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.124 [2024-07-15 23:51:47.940016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.124 qpair failed and we were unable to recover it. 00:25:13.124 [2024-07-15 23:51:47.940196] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.124 [2024-07-15 23:51:47.940245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.124 qpair failed and we were unable to recover it. 00:25:13.124 [2024-07-15 23:51:47.940432] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.124 [2024-07-15 23:51:47.940480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.124 qpair failed and we were unable to recover it. 00:25:13.124 [2024-07-15 23:51:47.940698] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.124 [2024-07-15 23:51:47.940747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.124 qpair failed and we were unable to recover it. 00:25:13.124 [2024-07-15 23:51:47.940950] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.124 [2024-07-15 23:51:47.941014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.124 qpair failed and we were unable to recover it. 00:25:13.124 [2024-07-15 23:51:47.941182] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.124 [2024-07-15 23:51:47.941231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.124 qpair failed and we were unable to recover it. 00:25:13.124 [2024-07-15 23:51:47.941434] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.124 [2024-07-15 23:51:47.941483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.124 qpair failed and we were unable to recover it. 00:25:13.124 [2024-07-15 23:51:47.941726] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.124 [2024-07-15 23:51:47.941775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.124 qpair failed and we were unable to recover it. 
00:25:13.124 [2024-07-15 23:51:47.941984] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.124 [2024-07-15 23:51:47.942034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.124 qpair failed and we were unable to recover it. 00:25:13.124 [2024-07-15 23:51:47.942242] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.124 [2024-07-15 23:51:47.942291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.124 qpair failed and we were unable to recover it. 00:25:13.124 [2024-07-15 23:51:47.942499] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.124 [2024-07-15 23:51:47.942548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.124 qpair failed and we were unable to recover it. 00:25:13.124 [2024-07-15 23:51:47.942722] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.124 [2024-07-15 23:51:47.942771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.124 qpair failed and we were unable to recover it. 00:25:13.124 [2024-07-15 23:51:47.943010] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.124 [2024-07-15 23:51:47.943061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.124 qpair failed and we were unable to recover it. 00:25:13.124 [2024-07-15 23:51:47.943244] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.124 [2024-07-15 23:51:47.943294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.124 qpair failed and we were unable to recover it. 00:25:13.124 [2024-07-15 23:51:47.943481] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.124 [2024-07-15 23:51:47.943532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.124 qpair failed and we were unable to recover it. 00:25:13.124 [2024-07-15 23:51:47.943716] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.124 [2024-07-15 23:51:47.943767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.124 qpair failed and we were unable to recover it. 00:25:13.124 [2024-07-15 23:51:47.943974] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.124 [2024-07-15 23:51:47.944025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.124 qpair failed and we were unable to recover it. 00:25:13.124 [2024-07-15 23:51:47.944236] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.124 [2024-07-15 23:51:47.944285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.124 qpair failed and we were unable to recover it. 
00:25:13.124 [2024-07-15 23:51:47.944507] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.124 [2024-07-15 23:51:47.944555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.124 qpair failed and we were unable to recover it. 00:25:13.124 [2024-07-15 23:51:47.944736] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.124 [2024-07-15 23:51:47.944784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.124 qpair failed and we were unable to recover it. 00:25:13.124 [2024-07-15 23:51:47.944995] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.124 [2024-07-15 23:51:47.945057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.124 qpair failed and we were unable to recover it. 00:25:13.124 [2024-07-15 23:51:47.945265] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.124 [2024-07-15 23:51:47.945315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.124 qpair failed and we were unable to recover it. 00:25:13.124 [2024-07-15 23:51:47.945523] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.124 [2024-07-15 23:51:47.945573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.124 qpair failed and we were unable to recover it. 00:25:13.124 [2024-07-15 23:51:47.945788] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.124 [2024-07-15 23:51:47.945838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.124 qpair failed and we were unable to recover it. 00:25:13.124 [2024-07-15 23:51:47.946016] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.124 [2024-07-15 23:51:47.946068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.124 qpair failed and we were unable to recover it. 00:25:13.124 [2024-07-15 23:51:47.946251] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.124 [2024-07-15 23:51:47.946302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.124 qpair failed and we were unable to recover it. 00:25:13.124 [2024-07-15 23:51:47.946538] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.124 [2024-07-15 23:51:47.946587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.124 qpair failed and we were unable to recover it. 00:25:13.124 [2024-07-15 23:51:47.946754] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.124 [2024-07-15 23:51:47.946803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.124 qpair failed and we were unable to recover it. 
00:25:13.124 [2024-07-15 23:51:47.947005] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.124 [2024-07-15 23:51:47.947055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.124 qpair failed and we were unable to recover it. 00:25:13.124 [2024-07-15 23:51:47.947228] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.124 [2024-07-15 23:51:47.947279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.124 qpair failed and we were unable to recover it. 00:25:13.124 [2024-07-15 23:51:47.947527] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.124 [2024-07-15 23:51:47.947577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.124 qpair failed and we were unable to recover it. 00:25:13.124 [2024-07-15 23:51:47.947749] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.124 [2024-07-15 23:51:47.947798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.124 qpair failed and we were unable to recover it. 00:25:13.124 [2024-07-15 23:51:47.947995] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.124 [2024-07-15 23:51:47.948044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.124 qpair failed and we were unable to recover it. 00:25:13.124 [2024-07-15 23:51:47.948253] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.124 [2024-07-15 23:51:47.948302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.124 qpair failed and we were unable to recover it. 00:25:13.125 [2024-07-15 23:51:47.948554] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.125 [2024-07-15 23:51:47.948603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.125 qpair failed and we were unable to recover it. 00:25:13.125 [2024-07-15 23:51:47.948837] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.125 [2024-07-15 23:51:47.948885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.125 qpair failed and we were unable to recover it. 00:25:13.125 [2024-07-15 23:51:47.949067] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.125 [2024-07-15 23:51:47.949118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.125 qpair failed and we were unable to recover it. 00:25:13.125 [2024-07-15 23:51:47.949307] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.125 [2024-07-15 23:51:47.949357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.125 qpair failed and we were unable to recover it. 
00:25:13.125 [2024-07-15 23:51:47.949591] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.125 [2024-07-15 23:51:47.949641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.125 qpair failed and we were unable to recover it. 00:25:13.125 [2024-07-15 23:51:47.949850] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.125 [2024-07-15 23:51:47.949899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.125 qpair failed and we were unable to recover it. 00:25:13.125 [2024-07-15 23:51:47.950127] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.125 [2024-07-15 23:51:47.950176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.125 qpair failed and we were unable to recover it. 00:25:13.125 [2024-07-15 23:51:47.950394] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.125 [2024-07-15 23:51:47.950443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.125 qpair failed and we were unable to recover it. 00:25:13.125 [2024-07-15 23:51:47.950665] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.125 [2024-07-15 23:51:47.950714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.125 qpair failed and we were unable to recover it. 00:25:13.125 [2024-07-15 23:51:47.950897] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.125 [2024-07-15 23:51:47.950945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.125 qpair failed and we were unable to recover it. 00:25:13.125 [2024-07-15 23:51:47.951160] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.125 [2024-07-15 23:51:47.951210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.125 qpair failed and we were unable to recover it. 00:25:13.125 [2024-07-15 23:51:47.951441] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.125 [2024-07-15 23:51:47.951490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.125 qpair failed and we were unable to recover it. 00:25:13.125 [2024-07-15 23:51:47.951701] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.125 [2024-07-15 23:51:47.951749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.125 qpair failed and we were unable to recover it. 00:25:13.125 [2024-07-15 23:51:47.951988] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.125 [2024-07-15 23:51:47.952041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.125 qpair failed and we were unable to recover it. 
00:25:13.125 [2024-07-15 23:51:47.952235] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.125 [2024-07-15 23:51:47.952289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.125 qpair failed and we were unable to recover it. 00:25:13.125 [2024-07-15 23:51:47.952523] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.125 [2024-07-15 23:51:47.952575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.125 qpair failed and we were unable to recover it. 00:25:13.125 [2024-07-15 23:51:47.952760] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.125 [2024-07-15 23:51:47.952815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.125 qpair failed and we were unable to recover it. 00:25:13.125 [2024-07-15 23:51:47.953047] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.125 [2024-07-15 23:51:47.953101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.125 qpair failed and we were unable to recover it. 00:25:13.125 [2024-07-15 23:51:47.953290] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.125 [2024-07-15 23:51:47.953342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.125 qpair failed and we were unable to recover it. 00:25:13.125 [2024-07-15 23:51:47.953553] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.125 [2024-07-15 23:51:47.953604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.125 qpair failed and we were unable to recover it. 00:25:13.125 [2024-07-15 23:51:47.953817] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.125 [2024-07-15 23:51:47.953869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.125 qpair failed and we were unable to recover it. 00:25:13.125 [2024-07-15 23:51:47.954123] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.125 [2024-07-15 23:51:47.954176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.126 qpair failed and we were unable to recover it. 00:25:13.126 [2024-07-15 23:51:47.954368] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.126 [2024-07-15 23:51:47.954422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.126 qpair failed and we were unable to recover it. 00:25:13.126 [2024-07-15 23:51:47.954640] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.126 [2024-07-15 23:51:47.954692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.126 qpair failed and we were unable to recover it. 
00:25:13.126 [2024-07-15 23:51:47.954925] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.126 [2024-07-15 23:51:47.954998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.126 qpair failed and we were unable to recover it. 00:25:13.126 [2024-07-15 23:51:47.955198] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.126 [2024-07-15 23:51:47.955252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.126 qpair failed and we were unable to recover it. 00:25:13.126 [2024-07-15 23:51:47.955483] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.126 [2024-07-15 23:51:47.955543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.126 qpair failed and we were unable to recover it. 00:25:13.126 [2024-07-15 23:51:47.955768] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.126 [2024-07-15 23:51:47.955819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.126 qpair failed and we were unable to recover it. 00:25:13.126 [2024-07-15 23:51:47.956093] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.126 [2024-07-15 23:51:47.956147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.126 qpair failed and we were unable to recover it. 00:25:13.126 [2024-07-15 23:51:47.956372] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.126 [2024-07-15 23:51:47.956424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.126 qpair failed and we were unable to recover it. 00:25:13.126 [2024-07-15 23:51:47.956644] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.126 [2024-07-15 23:51:47.956695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.126 qpair failed and we were unable to recover it. 00:25:13.126 [2024-07-15 23:51:47.956912] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.126 [2024-07-15 23:51:47.956978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.126 qpair failed and we were unable to recover it. 00:25:13.126 [2024-07-15 23:51:47.957219] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.126 [2024-07-15 23:51:47.957271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.126 qpair failed and we were unable to recover it. 00:25:13.126 [2024-07-15 23:51:47.957497] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.126 [2024-07-15 23:51:47.957548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.126 qpair failed and we were unable to recover it. 
00:25:13.126 [2024-07-15 23:51:47.957805] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.126 [2024-07-15 23:51:47.957856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.126 qpair failed and we were unable to recover it.
[... the three messages above repeat, with fresh timestamps, roughly 200 further times between 23:51:47.957 and 23:51:48.020; in every instance posix_sock_create reports connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420, and the qpair fails without recovery ...]
00:25:13.132 [2024-07-15 23:51:48.020431] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.132 [2024-07-15 23:51:48.020496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.132 qpair failed and we were unable to recover it.
00:25:13.132 [2024-07-15 23:51:48.020782] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.132 [2024-07-15 23:51:48.020845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.132 qpair failed and we were unable to recover it. 00:25:13.132 [2024-07-15 23:51:48.021111] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.132 [2024-07-15 23:51:48.021177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.132 qpair failed and we were unable to recover it. 00:25:13.132 [2024-07-15 23:51:48.021403] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.132 [2024-07-15 23:51:48.021470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.132 qpair failed and we were unable to recover it. 00:25:13.132 [2024-07-15 23:51:48.021731] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.132 [2024-07-15 23:51:48.021783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.132 qpair failed and we were unable to recover it. 00:25:13.132 [2024-07-15 23:51:48.021985] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.132 [2024-07-15 23:51:48.022065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.132 qpair failed and we were unable to recover it. 00:25:13.132 [2024-07-15 23:51:48.022382] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.132 [2024-07-15 23:51:48.022447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.132 qpair failed and we were unable to recover it. 00:25:13.132 [2024-07-15 23:51:48.022746] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.132 [2024-07-15 23:51:48.022810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.132 qpair failed and we were unable to recover it. 00:25:13.132 [2024-07-15 23:51:48.023075] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.132 [2024-07-15 23:51:48.023142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.132 qpair failed and we were unable to recover it. 00:25:13.132 [2024-07-15 23:51:48.023413] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.132 [2024-07-15 23:51:48.023477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.132 qpair failed and we were unable to recover it. 00:25:13.132 [2024-07-15 23:51:48.023747] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.132 [2024-07-15 23:51:48.023814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.132 qpair failed and we were unable to recover it. 
00:25:13.132 [2024-07-15 23:51:48.024060] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.132 [2024-07-15 23:51:48.024125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.132 qpair failed and we were unable to recover it. 00:25:13.132 [2024-07-15 23:51:48.024398] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.132 [2024-07-15 23:51:48.024466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.132 qpair failed and we were unable to recover it. 00:25:13.132 [2024-07-15 23:51:48.024705] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.132 [2024-07-15 23:51:48.024772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.132 qpair failed and we were unable to recover it. 00:25:13.132 [2024-07-15 23:51:48.025095] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.132 [2024-07-15 23:51:48.025162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.132 qpair failed and we were unable to recover it. 00:25:13.132 [2024-07-15 23:51:48.025468] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.132 [2024-07-15 23:51:48.025532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.132 qpair failed and we were unable to recover it. 00:25:13.132 [2024-07-15 23:51:48.025853] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.132 [2024-07-15 23:51:48.025917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.132 qpair failed and we were unable to recover it. 00:25:13.132 [2024-07-15 23:51:48.026250] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.132 [2024-07-15 23:51:48.026315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.132 qpair failed and we were unable to recover it. 00:25:13.132 [2024-07-15 23:51:48.026618] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.132 [2024-07-15 23:51:48.026682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.132 qpair failed and we were unable to recover it. 00:25:13.132 [2024-07-15 23:51:48.026999] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.132 [2024-07-15 23:51:48.027066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.132 qpair failed and we were unable to recover it. 00:25:13.132 [2024-07-15 23:51:48.027347] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.132 [2024-07-15 23:51:48.027413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.132 qpair failed and we were unable to recover it. 
00:25:13.132 [2024-07-15 23:51:48.027689] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.132 [2024-07-15 23:51:48.027756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.132 qpair failed and we were unable to recover it. 00:25:13.132 [2024-07-15 23:51:48.028078] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.132 [2024-07-15 23:51:48.028145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.132 qpair failed and we were unable to recover it. 00:25:13.132 [2024-07-15 23:51:48.028419] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.132 [2024-07-15 23:51:48.028497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.132 qpair failed and we were unable to recover it. 00:25:13.132 [2024-07-15 23:51:48.028816] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.132 [2024-07-15 23:51:48.028881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.132 qpair failed and we were unable to recover it. 00:25:13.132 [2024-07-15 23:51:48.029167] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.132 [2024-07-15 23:51:48.029235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.132 qpair failed and we were unable to recover it. 00:25:13.132 [2024-07-15 23:51:48.029502] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.132 [2024-07-15 23:51:48.029567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.132 qpair failed and we were unable to recover it. 00:25:13.132 [2024-07-15 23:51:48.029831] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.132 [2024-07-15 23:51:48.029895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.132 qpair failed and we were unable to recover it. 00:25:13.132 [2024-07-15 23:51:48.030184] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.132 [2024-07-15 23:51:48.030251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.132 qpair failed and we were unable to recover it. 00:25:13.132 [2024-07-15 23:51:48.030521] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.132 [2024-07-15 23:51:48.030589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.132 qpair failed and we were unable to recover it. 00:25:13.132 [2024-07-15 23:51:48.030882] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.132 [2024-07-15 23:51:48.030947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.132 qpair failed and we were unable to recover it. 
00:25:13.132 [2024-07-15 23:51:48.031279] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.132 [2024-07-15 23:51:48.031343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.132 qpair failed and we were unable to recover it. 00:25:13.133 [2024-07-15 23:51:48.031603] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.133 [2024-07-15 23:51:48.031668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.133 qpair failed and we were unable to recover it. 00:25:13.133 [2024-07-15 23:51:48.032013] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.133 [2024-07-15 23:51:48.032080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.133 qpair failed and we were unable to recover it. 00:25:13.133 [2024-07-15 23:51:48.032370] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.133 [2024-07-15 23:51:48.032434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.133 qpair failed and we were unable to recover it. 00:25:13.133 [2024-07-15 23:51:48.032742] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.133 [2024-07-15 23:51:48.032807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.133 qpair failed and we were unable to recover it. 00:25:13.133 [2024-07-15 23:51:48.033088] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.133 [2024-07-15 23:51:48.033156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.133 qpair failed and we were unable to recover it. 00:25:13.133 [2024-07-15 23:51:48.033483] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.133 [2024-07-15 23:51:48.033548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.133 qpair failed and we were unable to recover it. 00:25:13.133 [2024-07-15 23:51:48.033779] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.133 [2024-07-15 23:51:48.033844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.133 qpair failed and we were unable to recover it. 00:25:13.133 [2024-07-15 23:51:48.034159] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.133 [2024-07-15 23:51:48.034212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.133 qpair failed and we were unable to recover it. 00:25:13.133 [2024-07-15 23:51:48.034492] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.133 [2024-07-15 23:51:48.034556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.133 qpair failed and we were unable to recover it. 
00:25:13.133 [2024-07-15 23:51:48.034810] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.133 [2024-07-15 23:51:48.034875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.133 qpair failed and we were unable to recover it. 00:25:13.133 [2024-07-15 23:51:48.035155] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.133 [2024-07-15 23:51:48.035223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.133 qpair failed and we were unable to recover it. 00:25:13.133 [2024-07-15 23:51:48.035499] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.133 [2024-07-15 23:51:48.035567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.133 qpair failed and we were unable to recover it. 00:25:13.133 [2024-07-15 23:51:48.035887] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.133 [2024-07-15 23:51:48.035952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.133 qpair failed and we were unable to recover it. 00:25:13.133 [2024-07-15 23:51:48.036212] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.133 [2024-07-15 23:51:48.036278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.133 qpair failed and we were unable to recover it. 00:25:13.133 [2024-07-15 23:51:48.036549] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.133 [2024-07-15 23:51:48.036614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.133 qpair failed and we were unable to recover it. 00:25:13.133 [2024-07-15 23:51:48.036889] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.133 [2024-07-15 23:51:48.036940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.133 qpair failed and we were unable to recover it. 00:25:13.133 [2024-07-15 23:51:48.037200] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.133 [2024-07-15 23:51:48.037268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.133 qpair failed and we were unable to recover it. 00:25:13.133 [2024-07-15 23:51:48.037575] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.133 [2024-07-15 23:51:48.037641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.133 qpair failed and we were unable to recover it. 00:25:13.133 [2024-07-15 23:51:48.037977] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.133 [2024-07-15 23:51:48.038044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.133 qpair failed and we were unable to recover it. 
00:25:13.133 [2024-07-15 23:51:48.038353] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.133 [2024-07-15 23:51:48.038417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.133 qpair failed and we were unable to recover it. 00:25:13.133 [2024-07-15 23:51:48.038685] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.133 [2024-07-15 23:51:48.038750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.133 qpair failed and we were unable to recover it. 00:25:13.133 [2024-07-15 23:51:48.039013] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.133 [2024-07-15 23:51:48.039081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.133 qpair failed and we were unable to recover it. 00:25:13.133 [2024-07-15 23:51:48.039312] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.133 [2024-07-15 23:51:48.039377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.133 qpair failed and we were unable to recover it. 00:25:13.133 [2024-07-15 23:51:48.039690] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.133 [2024-07-15 23:51:48.039755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.133 qpair failed and we were unable to recover it. 00:25:13.133 [2024-07-15 23:51:48.040025] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.133 [2024-07-15 23:51:48.040092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.133 qpair failed and we were unable to recover it. 00:25:13.133 [2024-07-15 23:51:48.040395] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.133 [2024-07-15 23:51:48.040460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.133 qpair failed and we were unable to recover it. 00:25:13.133 [2024-07-15 23:51:48.040738] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.133 [2024-07-15 23:51:48.040803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.133 qpair failed and we were unable to recover it. 00:25:13.133 [2024-07-15 23:51:48.041112] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.133 [2024-07-15 23:51:48.041178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.133 qpair failed and we were unable to recover it. 00:25:13.133 [2024-07-15 23:51:48.041498] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.133 [2024-07-15 23:51:48.041564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.133 qpair failed and we were unable to recover it. 
00:25:13.133 [2024-07-15 23:51:48.041840] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.133 [2024-07-15 23:51:48.041904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.133 qpair failed and we were unable to recover it. 00:25:13.133 [2024-07-15 23:51:48.042193] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.133 [2024-07-15 23:51:48.042259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.133 qpair failed and we were unable to recover it. 00:25:13.133 [2024-07-15 23:51:48.042498] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.133 [2024-07-15 23:51:48.042575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.133 qpair failed and we were unable to recover it. 00:25:13.133 [2024-07-15 23:51:48.042832] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.133 [2024-07-15 23:51:48.042897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.133 qpair failed and we were unable to recover it. 00:25:13.133 [2024-07-15 23:51:48.043141] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.133 [2024-07-15 23:51:48.043208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.133 qpair failed and we were unable to recover it. 00:25:13.133 [2024-07-15 23:51:48.043475] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.133 [2024-07-15 23:51:48.043542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.133 qpair failed and we were unable to recover it. 00:25:13.133 [2024-07-15 23:51:48.043858] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.133 [2024-07-15 23:51:48.043923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.133 qpair failed and we were unable to recover it. 00:25:13.133 [2024-07-15 23:51:48.044256] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.133 [2024-07-15 23:51:48.044321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.133 qpair failed and we were unable to recover it. 00:25:13.133 [2024-07-15 23:51:48.044594] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.133 [2024-07-15 23:51:48.044659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.133 qpair failed and we were unable to recover it. 00:25:13.133 [2024-07-15 23:51:48.044911] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.133 [2024-07-15 23:51:48.044994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.133 qpair failed and we were unable to recover it. 
00:25:13.133 [2024-07-15 23:51:48.045300] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.133 [2024-07-15 23:51:48.045366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.133 qpair failed and we were unable to recover it. 00:25:13.133 [2024-07-15 23:51:48.045668] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.133 [2024-07-15 23:51:48.045733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.133 qpair failed and we were unable to recover it. 00:25:13.134 [2024-07-15 23:51:48.045971] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.134 [2024-07-15 23:51:48.046025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.134 qpair failed and we were unable to recover it. 00:25:13.134 [2024-07-15 23:51:48.046266] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.134 [2024-07-15 23:51:48.046330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.134 qpair failed and we were unable to recover it. 00:25:13.134 [2024-07-15 23:51:48.046605] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.134 [2024-07-15 23:51:48.046669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.134 qpair failed and we were unable to recover it. 00:25:13.134 [2024-07-15 23:51:48.046950] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.134 [2024-07-15 23:51:48.047032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.134 qpair failed and we were unable to recover it. 00:25:13.134 [2024-07-15 23:51:48.047349] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.134 [2024-07-15 23:51:48.047413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.134 qpair failed and we were unable to recover it. 00:25:13.134 [2024-07-15 23:51:48.047728] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.134 [2024-07-15 23:51:48.047793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.134 qpair failed and we were unable to recover it. 00:25:13.134 [2024-07-15 23:51:48.048067] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.134 [2024-07-15 23:51:48.048134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.134 qpair failed and we were unable to recover it. 00:25:13.134 [2024-07-15 23:51:48.048365] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.134 [2024-07-15 23:51:48.048433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.134 qpair failed and we were unable to recover it. 
00:25:13.134 [2024-07-15 23:51:48.048745] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.134 [2024-07-15 23:51:48.048811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.134 qpair failed and we were unable to recover it. 00:25:13.134 [2024-07-15 23:51:48.049085] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.134 [2024-07-15 23:51:48.049151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.134 qpair failed and we were unable to recover it. 00:25:13.134 [2024-07-15 23:51:48.049442] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.134 [2024-07-15 23:51:48.049507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.134 qpair failed and we were unable to recover it. 00:25:13.134 [2024-07-15 23:51:48.049748] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.134 [2024-07-15 23:51:48.049815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.134 qpair failed and we were unable to recover it. 00:25:13.134 [2024-07-15 23:51:48.050097] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.134 [2024-07-15 23:51:48.050164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.134 qpair failed and we were unable to recover it. 00:25:13.134 [2024-07-15 23:51:48.050435] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.134 [2024-07-15 23:51:48.050499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.134 qpair failed and we were unable to recover it. 00:25:13.134 [2024-07-15 23:51:48.050771] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.134 [2024-07-15 23:51:48.050836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.134 qpair failed and we were unable to recover it. 00:25:13.134 [2024-07-15 23:51:48.051112] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.134 [2024-07-15 23:51:48.051179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.134 qpair failed and we were unable to recover it. 00:25:13.134 [2024-07-15 23:51:48.051458] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.134 [2024-07-15 23:51:48.051524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.134 qpair failed and we were unable to recover it. 00:25:13.134 [2024-07-15 23:51:48.051766] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.134 [2024-07-15 23:51:48.051835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.134 qpair failed and we were unable to recover it. 
00:25:13.134 [2024-07-15 23:51:48.052086] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.134 [2024-07-15 23:51:48.052152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.134 qpair failed and we were unable to recover it. 00:25:13.134 [2024-07-15 23:51:48.052426] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.134 [2024-07-15 23:51:48.052493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.134 qpair failed and we were unable to recover it. 00:25:13.134 [2024-07-15 23:51:48.052748] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.134 [2024-07-15 23:51:48.052782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.134 qpair failed and we were unable to recover it. 00:25:13.134 [2024-07-15 23:51:48.052931] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.134 [2024-07-15 23:51:48.052974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.134 qpair failed and we were unable to recover it. 00:25:13.134 [2024-07-15 23:51:48.053116] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.134 [2024-07-15 23:51:48.053152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.134 qpair failed and we were unable to recover it. 00:25:13.134 [2024-07-15 23:51:48.053292] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.134 [2024-07-15 23:51:48.053327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.134 qpair failed and we were unable to recover it. 00:25:13.134 [2024-07-15 23:51:48.053472] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.134 [2024-07-15 23:51:48.053505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.134 qpair failed and we were unable to recover it. 00:25:13.134 [2024-07-15 23:51:48.053654] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.134 [2024-07-15 23:51:48.053689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.134 qpair failed and we were unable to recover it. 00:25:13.134 [2024-07-15 23:51:48.053816] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.134 [2024-07-15 23:51:48.053850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.134 qpair failed and we were unable to recover it. 00:25:13.134 [2024-07-15 23:51:48.053973] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.134 [2024-07-15 23:51:48.054008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.134 qpair failed and we were unable to recover it. 
00:25:13.134 [2024-07-15 23:51:48.054136] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.134 [2024-07-15 23:51:48.054170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.134 qpair failed and we were unable to recover it. 00:25:13.134 [2024-07-15 23:51:48.054301] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.134 [2024-07-15 23:51:48.054334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.134 qpair failed and we were unable to recover it. 00:25:13.134 [2024-07-15 23:51:48.054501] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.134 [2024-07-15 23:51:48.054543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.134 qpair failed and we were unable to recover it. 00:25:13.134 [2024-07-15 23:51:48.054697] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.134 [2024-07-15 23:51:48.054732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.134 qpair failed and we were unable to recover it. 00:25:13.134 [2024-07-15 23:51:48.054997] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.134 [2024-07-15 23:51:48.055032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.134 qpair failed and we were unable to recover it. 00:25:13.134 [2024-07-15 23:51:48.055182] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.134 [2024-07-15 23:51:48.055217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.134 qpair failed and we were unable to recover it. 00:25:13.134 [2024-07-15 23:51:48.055468] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.134 [2024-07-15 23:51:48.055533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.134 qpair failed and we were unable to recover it. 00:25:13.134 [2024-07-15 23:51:48.055796] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.134 [2024-07-15 23:51:48.055860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.134 qpair failed and we were unable to recover it. 00:25:13.134 [2024-07-15 23:51:48.056143] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.134 [2024-07-15 23:51:48.056178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.134 qpair failed and we were unable to recover it. 00:25:13.134 [2024-07-15 23:51:48.056405] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.134 [2024-07-15 23:51:48.056440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.134 qpair failed and we were unable to recover it. 
00:25:13.134 [2024-07-15 23:51:48.056604] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.134 [2024-07-15 23:51:48.056671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.134 qpair failed and we were unable to recover it. 00:25:13.134 [2024-07-15 23:51:48.056938] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.134 [2024-07-15 23:51:48.057022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.134 qpair failed and we were unable to recover it. 00:25:13.134 [2024-07-15 23:51:48.057208] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.134 [2024-07-15 23:51:48.057242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.135 qpair failed and we were unable to recover it. 00:25:13.135 [2024-07-15 23:51:48.057579] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.135 [2024-07-15 23:51:48.057643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.135 qpair failed and we were unable to recover it. 00:25:13.135 [2024-07-15 23:51:48.057973] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.135 [2024-07-15 23:51:48.058032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.135 qpair failed and we were unable to recover it. 00:25:13.135 [2024-07-15 23:51:48.058159] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.135 [2024-07-15 23:51:48.058193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.135 qpair failed and we were unable to recover it. 00:25:13.135 [2024-07-15 23:51:48.058411] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.135 [2024-07-15 23:51:48.058446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.135 qpair failed and we were unable to recover it. 00:25:13.135 [2024-07-15 23:51:48.058657] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.135 [2024-07-15 23:51:48.058709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.135 qpair failed and we were unable to recover it. 00:25:13.135 [2024-07-15 23:51:48.058928] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.135 [2024-07-15 23:51:48.059010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.135 qpair failed and we were unable to recover it. 00:25:13.135 [2024-07-15 23:51:48.059178] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.135 [2024-07-15 23:51:48.059212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.135 qpair failed and we were unable to recover it. 
00:25:13.135 [2024-07-15 23:51:48.059401] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.135 [2024-07-15 23:51:48.059465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.135 qpair failed and we were unable to recover it. 00:25:13.135 [2024-07-15 23:51:48.059732] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.135 [2024-07-15 23:51:48.059797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.135 qpair failed and we were unable to recover it. 00:25:13.135 [2024-07-15 23:51:48.060055] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.135 [2024-07-15 23:51:48.060090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.135 qpair failed and we were unable to recover it. 00:25:13.135 [2024-07-15 23:51:48.060218] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.135 [2024-07-15 23:51:48.060284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.135 qpair failed and we were unable to recover it. 00:25:13.135 [2024-07-15 23:51:48.060592] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.135 [2024-07-15 23:51:48.060657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.135 qpair failed and we were unable to recover it. 00:25:13.135 [2024-07-15 23:51:48.060977] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.135 [2024-07-15 23:51:48.061042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.135 qpair failed and we were unable to recover it. 00:25:13.135 [2024-07-15 23:51:48.061200] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.135 [2024-07-15 23:51:48.061235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.135 qpair failed and we were unable to recover it. 00:25:13.135 [2024-07-15 23:51:48.061546] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.135 [2024-07-15 23:51:48.061611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.135 qpair failed and we were unable to recover it. 00:25:13.135 [2024-07-15 23:51:48.061877] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.135 [2024-07-15 23:51:48.061941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.135 qpair failed and we were unable to recover it. 00:25:13.135 [2024-07-15 23:51:48.062160] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.135 [2024-07-15 23:51:48.062194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.135 qpair failed and we were unable to recover it. 
00:25:13.135 [2024-07-15 23:51:48.062441] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.135 [2024-07-15 23:51:48.062505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.135 qpair failed and we were unable to recover it.
00:25:13.135 [... the same three-message sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every reconnect attempt between 23:51:48.062 and 23:51:48.127 ...]
00:25:13.140 [2024-07-15 23:51:48.127216] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.140 [2024-07-15 23:51:48.127287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.140 qpair failed and we were unable to recover it.
00:25:13.140 [2024-07-15 23:51:48.127578] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.140 [2024-07-15 23:51:48.127613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.140 qpair failed and we were unable to recover it. 00:25:13.140 [2024-07-15 23:51:48.127786] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.140 [2024-07-15 23:51:48.127821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.140 qpair failed and we were unable to recover it. 00:25:13.140 [2024-07-15 23:51:48.128095] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.140 [2024-07-15 23:51:48.128162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.140 qpair failed and we were unable to recover it. 00:25:13.140 [2024-07-15 23:51:48.128428] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.140 [2024-07-15 23:51:48.128493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.140 qpair failed and we were unable to recover it. 00:25:13.140 [2024-07-15 23:51:48.128803] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.140 [2024-07-15 23:51:48.128868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.140 qpair failed and we were unable to recover it. 00:25:13.140 [2024-07-15 23:51:48.129157] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.140 [2024-07-15 23:51:48.129224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.140 qpair failed and we were unable to recover it. 00:25:13.140 [2024-07-15 23:51:48.129487] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.140 [2024-07-15 23:51:48.129552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.140 qpair failed and we were unable to recover it. 00:25:13.140 [2024-07-15 23:51:48.129905] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.140 [2024-07-15 23:51:48.129983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.140 qpair failed and we were unable to recover it. 00:25:13.140 [2024-07-15 23:51:48.130302] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.140 [2024-07-15 23:51:48.130367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.140 qpair failed and we were unable to recover it. 00:25:13.140 [2024-07-15 23:51:48.130635] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.140 [2024-07-15 23:51:48.130701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.140 qpair failed and we were unable to recover it. 
00:25:13.141 [2024-07-15 23:51:48.130990] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.141 [2024-07-15 23:51:48.131064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.141 qpair failed and we were unable to recover it. 00:25:13.141 [2024-07-15 23:51:48.131374] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.141 [2024-07-15 23:51:48.131439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.141 qpair failed and we were unable to recover it. 00:25:13.141 [2024-07-15 23:51:48.131702] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.141 [2024-07-15 23:51:48.131768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.141 qpair failed and we were unable to recover it. 00:25:13.141 [2024-07-15 23:51:48.132040] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.141 [2024-07-15 23:51:48.132106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.141 qpair failed and we were unable to recover it. 00:25:13.141 [2024-07-15 23:51:48.132373] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.141 [2024-07-15 23:51:48.132439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.141 qpair failed and we were unable to recover it. 00:25:13.141 [2024-07-15 23:51:48.132744] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.141 [2024-07-15 23:51:48.132797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.141 qpair failed and we were unable to recover it. 00:25:13.141 [2024-07-15 23:51:48.132991] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.141 [2024-07-15 23:51:48.133045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.141 qpair failed and we were unable to recover it. 00:25:13.141 [2024-07-15 23:51:48.133228] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.141 [2024-07-15 23:51:48.133281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.141 qpair failed and we were unable to recover it. 00:25:13.141 [2024-07-15 23:51:48.134205] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.141 [2024-07-15 23:51:48.134235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.141 qpair failed and we were unable to recover it. 00:25:13.141 [2024-07-15 23:51:48.134405] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.141 [2024-07-15 23:51:48.134432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.141 qpair failed and we were unable to recover it. 
00:25:13.141 [2024-07-15 23:51:48.134554] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.141 [2024-07-15 23:51:48.134580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.141 qpair failed and we were unable to recover it. 00:25:13.141 [2024-07-15 23:51:48.134729] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.141 [2024-07-15 23:51:48.134755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.141 qpair failed and we were unable to recover it. 00:25:13.141 [2024-07-15 23:51:48.134851] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.141 [2024-07-15 23:51:48.134878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.141 qpair failed and we were unable to recover it. 00:25:13.141 [2024-07-15 23:51:48.134977] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.141 [2024-07-15 23:51:48.135026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.141 qpair failed and we were unable to recover it. 00:25:13.141 [2024-07-15 23:51:48.135160] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.141 [2024-07-15 23:51:48.135196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.141 qpair failed and we were unable to recover it. 00:25:13.141 [2024-07-15 23:51:48.135452] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.141 [2024-07-15 23:51:48.135520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.141 qpair failed and we were unable to recover it. 00:25:13.141 [2024-07-15 23:51:48.135766] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.141 [2024-07-15 23:51:48.135831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.141 qpair failed and we were unable to recover it. 00:25:13.141 [2024-07-15 23:51:48.136063] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.141 [2024-07-15 23:51:48.136092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.141 qpair failed and we were unable to recover it. 00:25:13.141 [2024-07-15 23:51:48.136190] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.141 [2024-07-15 23:51:48.136242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.141 qpair failed and we were unable to recover it. 00:25:13.141 [2024-07-15 23:51:48.136462] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.141 [2024-07-15 23:51:48.136552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.141 qpair failed and we were unable to recover it. 
00:25:13.141 [2024-07-15 23:51:48.136746] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.141 [2024-07-15 23:51:48.136812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.141 qpair failed and we were unable to recover it. 00:25:13.141 [2024-07-15 23:51:48.136950] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.141 [2024-07-15 23:51:48.136983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.141 qpair failed and we were unable to recover it. 00:25:13.141 [2024-07-15 23:51:48.137110] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.141 [2024-07-15 23:51:48.137137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.141 qpair failed and we were unable to recover it. 00:25:13.141 [2024-07-15 23:51:48.137245] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.141 [2024-07-15 23:51:48.137272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.141 qpair failed and we were unable to recover it. 00:25:13.141 [2024-07-15 23:51:48.137392] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.141 [2024-07-15 23:51:48.137418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.141 qpair failed and we were unable to recover it. 00:25:13.141 [2024-07-15 23:51:48.137543] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.141 [2024-07-15 23:51:48.137570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.141 qpair failed and we were unable to recover it. 00:25:13.141 [2024-07-15 23:51:48.137688] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.141 [2024-07-15 23:51:48.137714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.141 qpair failed and we were unable to recover it. 00:25:13.141 [2024-07-15 23:51:48.137844] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.141 [2024-07-15 23:51:48.137870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.141 qpair failed and we were unable to recover it. 00:25:13.141 [2024-07-15 23:51:48.138000] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.141 [2024-07-15 23:51:48.138027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.141 qpair failed and we were unable to recover it. 00:25:13.141 [2024-07-15 23:51:48.138127] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.141 [2024-07-15 23:51:48.138153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.141 qpair failed and we were unable to recover it. 
00:25:13.141 [2024-07-15 23:51:48.138279] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.141 [2024-07-15 23:51:48.138305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.141 qpair failed and we were unable to recover it. 00:25:13.141 [2024-07-15 23:51:48.138451] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.141 [2024-07-15 23:51:48.138486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.141 qpair failed and we were unable to recover it. 00:25:13.141 [2024-07-15 23:51:48.138632] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.141 [2024-07-15 23:51:48.138659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.141 qpair failed and we were unable to recover it. 00:25:13.141 [2024-07-15 23:51:48.138786] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.141 [2024-07-15 23:51:48.138813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.141 qpair failed and we were unable to recover it. 00:25:13.141 [2024-07-15 23:51:48.138942] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.141 [2024-07-15 23:51:48.138975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.141 qpair failed and we were unable to recover it. 00:25:13.141 [2024-07-15 23:51:48.139101] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.141 [2024-07-15 23:51:48.139127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.141 qpair failed and we were unable to recover it. 00:25:13.141 [2024-07-15 23:51:48.139248] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.141 [2024-07-15 23:51:48.139282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.141 qpair failed and we were unable to recover it. 00:25:13.141 [2024-07-15 23:51:48.139462] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.141 [2024-07-15 23:51:48.139488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.141 qpair failed and we were unable to recover it. 00:25:13.141 [2024-07-15 23:51:48.139636] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.141 [2024-07-15 23:51:48.139662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.141 qpair failed and we were unable to recover it. 00:25:13.141 [2024-07-15 23:51:48.139764] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.141 [2024-07-15 23:51:48.139791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.141 qpair failed and we were unable to recover it. 
00:25:13.141 [2024-07-15 23:51:48.139910] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.141 [2024-07-15 23:51:48.139972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.141 qpair failed and we were unable to recover it. 00:25:13.141 [2024-07-15 23:51:48.140125] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.141 [2024-07-15 23:51:48.140151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.141 qpair failed and we were unable to recover it. 00:25:13.141 [2024-07-15 23:51:48.140252] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.141 [2024-07-15 23:51:48.140279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.141 qpair failed and we were unable to recover it. 00:25:13.141 [2024-07-15 23:51:48.140404] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.141 [2024-07-15 23:51:48.140430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.141 qpair failed and we were unable to recover it. 00:25:13.142 [2024-07-15 23:51:48.140579] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.142 [2024-07-15 23:51:48.140605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.142 qpair failed and we were unable to recover it. 00:25:13.142 [2024-07-15 23:51:48.140757] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.142 [2024-07-15 23:51:48.140783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.142 qpair failed and we were unable to recover it. 00:25:13.142 [2024-07-15 23:51:48.140914] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.142 [2024-07-15 23:51:48.140940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.142 qpair failed and we were unable to recover it. 00:25:13.142 [2024-07-15 23:51:48.141062] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.142 [2024-07-15 23:51:48.141089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.142 qpair failed and we were unable to recover it. 00:25:13.142 [2024-07-15 23:51:48.141213] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.142 [2024-07-15 23:51:48.141260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.142 qpair failed and we were unable to recover it. 00:25:13.142 [2024-07-15 23:51:48.141442] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.142 [2024-07-15 23:51:48.141476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.142 qpair failed and we were unable to recover it. 
00:25:13.142 [2024-07-15 23:51:48.141620] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.142 [2024-07-15 23:51:48.141654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.142 qpair failed and we were unable to recover it. 00:25:13.142 [2024-07-15 23:51:48.141780] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.142 [2024-07-15 23:51:48.141806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.142 qpair failed and we were unable to recover it. 00:25:13.142 [2024-07-15 23:51:48.141908] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.142 [2024-07-15 23:51:48.141934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.142 qpair failed and we were unable to recover it. 00:25:13.142 [2024-07-15 23:51:48.142079] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.142 [2024-07-15 23:51:48.142119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.142 qpair failed and we were unable to recover it. 00:25:13.142 [2024-07-15 23:51:48.142311] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.142 [2024-07-15 23:51:48.142346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.142 qpair failed and we were unable to recover it. 00:25:13.142 [2024-07-15 23:51:48.142515] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.142 [2024-07-15 23:51:48.142549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.142 qpair failed and we were unable to recover it. 00:25:13.142 [2024-07-15 23:51:48.142694] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.142 [2024-07-15 23:51:48.142736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.142 qpair failed and we were unable to recover it. 00:25:13.142 [2024-07-15 23:51:48.142833] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.142 [2024-07-15 23:51:48.142859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.142 qpair failed and we were unable to recover it. 00:25:13.142 [2024-07-15 23:51:48.142970] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.142 [2024-07-15 23:51:48.142998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.142 qpair failed and we were unable to recover it. 00:25:13.142 [2024-07-15 23:51:48.143127] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.142 [2024-07-15 23:51:48.143159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.142 qpair failed and we were unable to recover it. 
00:25:13.142 [2024-07-15 23:51:48.143279] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.142 [2024-07-15 23:51:48.143304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.142 qpair failed and we were unable to recover it. 00:25:13.142 [2024-07-15 23:51:48.143434] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.142 [2024-07-15 23:51:48.143460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.142 qpair failed and we were unable to recover it. 00:25:13.142 [2024-07-15 23:51:48.143574] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.142 [2024-07-15 23:51:48.143600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.142 qpair failed and we were unable to recover it. 00:25:13.142 [2024-07-15 23:51:48.143720] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.142 [2024-07-15 23:51:48.143746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.142 qpair failed and we were unable to recover it. 00:25:13.142 [2024-07-15 23:51:48.143893] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.142 [2024-07-15 23:51:48.143919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.142 qpair failed and we were unable to recover it. 00:25:13.142 [2024-07-15 23:51:48.144026] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.142 [2024-07-15 23:51:48.144052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.142 qpair failed and we were unable to recover it. 00:25:13.142 [2024-07-15 23:51:48.144151] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.142 [2024-07-15 23:51:48.144178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.142 qpair failed and we were unable to recover it. 00:25:13.142 [2024-07-15 23:51:48.144329] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.142 [2024-07-15 23:51:48.144374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.142 qpair failed and we were unable to recover it. 00:25:13.142 [2024-07-15 23:51:48.144497] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.142 [2024-07-15 23:51:48.144522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.142 qpair failed and we were unable to recover it. 00:25:13.142 [2024-07-15 23:51:48.144613] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.142 [2024-07-15 23:51:48.144638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.142 qpair failed and we were unable to recover it. 
00:25:13.142 [2024-07-15 23:51:48.144741] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.142 [2024-07-15 23:51:48.144767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.142 qpair failed and we were unable to recover it. 00:25:13.142 [2024-07-15 23:51:48.144913] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.142 [2024-07-15 23:51:48.144939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.142 qpair failed and we were unable to recover it. 00:25:13.142 [2024-07-15 23:51:48.145053] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.142 [2024-07-15 23:51:48.145079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.142 qpair failed and we were unable to recover it. 00:25:13.142 [2024-07-15 23:51:48.145179] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.142 [2024-07-15 23:51:48.145205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.142 qpair failed and we were unable to recover it. 00:25:13.142 [2024-07-15 23:51:48.145307] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.142 [2024-07-15 23:51:48.145332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.142 qpair failed and we were unable to recover it. 00:25:13.142 [2024-07-15 23:51:48.145490] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.142 [2024-07-15 23:51:48.145516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.142 qpair failed and we were unable to recover it. 00:25:13.142 [2024-07-15 23:51:48.145610] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.142 [2024-07-15 23:51:48.145637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.142 qpair failed and we were unable to recover it. 00:25:13.142 [2024-07-15 23:51:48.145738] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.142 [2024-07-15 23:51:48.145764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.142 qpair failed and we were unable to recover it. 00:25:13.142 [2024-07-15 23:51:48.145860] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.142 [2024-07-15 23:51:48.145886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.142 qpair failed and we were unable to recover it. 00:25:13.142 [2024-07-15 23:51:48.146014] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.142 [2024-07-15 23:51:48.146041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.142 qpair failed and we were unable to recover it. 
00:25:13.142 [2024-07-15 23:51:48.146139] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.142 [2024-07-15 23:51:48.146165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.142 qpair failed and we were unable to recover it. 00:25:13.142 [2024-07-15 23:51:48.146262] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.142 [2024-07-15 23:51:48.146288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.142 qpair failed and we were unable to recover it. 00:25:13.142 [2024-07-15 23:51:48.146418] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.142 [2024-07-15 23:51:48.146445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.142 qpair failed and we were unable to recover it. 00:25:13.142 [2024-07-15 23:51:48.146539] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.142 [2024-07-15 23:51:48.146564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.142 qpair failed and we were unable to recover it. 00:25:13.142 [2024-07-15 23:51:48.146659] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.142 [2024-07-15 23:51:48.146685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.142 qpair failed and we were unable to recover it. 00:25:13.142 [2024-07-15 23:51:48.146771] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.142 [2024-07-15 23:51:48.146796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.142 qpair failed and we were unable to recover it. 00:25:13.142 [2024-07-15 23:51:48.146895] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.142 [2024-07-15 23:51:48.146921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.142 qpair failed and we were unable to recover it. 00:25:13.142 [2024-07-15 23:51:48.147029] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.142 [2024-07-15 23:51:48.147055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.143 qpair failed and we were unable to recover it. 00:25:13.143 [2024-07-15 23:51:48.147178] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.143 [2024-07-15 23:51:48.147205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.143 qpair failed and we were unable to recover it. 00:25:13.143 [2024-07-15 23:51:48.147308] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.143 [2024-07-15 23:51:48.147334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.143 qpair failed and we were unable to recover it. 
00:25:13.143 [2024-07-15 23:51:48.147455] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.143 [2024-07-15 23:51:48.147481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.143 qpair failed and we were unable to recover it. 00:25:13.143 [2024-07-15 23:51:48.147584] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.143 [2024-07-15 23:51:48.147618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.143 qpair failed and we were unable to recover it. 00:25:13.143 [2024-07-15 23:51:48.147734] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.143 [2024-07-15 23:51:48.147762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.143 qpair failed and we were unable to recover it. 00:25:13.143 [2024-07-15 23:51:48.147862] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.143 [2024-07-15 23:51:48.147889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.143 qpair failed and we were unable to recover it. 00:25:13.143 [2024-07-15 23:51:48.148006] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.143 [2024-07-15 23:51:48.148033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.143 qpair failed and we were unable to recover it. 00:25:13.143 [2024-07-15 23:51:48.148131] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.143 [2024-07-15 23:51:48.148157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.143 qpair failed and we were unable to recover it. 00:25:13.143 [2024-07-15 23:51:48.148252] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.143 [2024-07-15 23:51:48.148278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.143 qpair failed and we were unable to recover it. 00:25:13.143 [2024-07-15 23:51:48.148399] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.143 [2024-07-15 23:51:48.148425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.143 qpair failed and we were unable to recover it. 00:25:13.143 [2024-07-15 23:51:48.148511] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.143 [2024-07-15 23:51:48.148537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.143 qpair failed and we were unable to recover it. 00:25:13.143 [2024-07-15 23:51:48.148628] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.143 [2024-07-15 23:51:48.148658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.143 qpair failed and we were unable to recover it. 
00:25:13.143 [2024-07-15 23:51:48.148755] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.143 [2024-07-15 23:51:48.148784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.143 qpair failed and we were unable to recover it. 00:25:13.143 [2024-07-15 23:51:48.148907] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.143 [2024-07-15 23:51:48.148933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.143 qpair failed and we were unable to recover it. 00:25:13.143 [2024-07-15 23:51:48.149042] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.143 [2024-07-15 23:51:48.149068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.143 qpair failed and we were unable to recover it. 00:25:13.143 [2024-07-15 23:51:48.149161] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.143 [2024-07-15 23:51:48.149186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.143 qpair failed and we were unable to recover it. 00:25:13.143 [2024-07-15 23:51:48.149311] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.143 [2024-07-15 23:51:48.149337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.143 qpair failed and we were unable to recover it. 00:25:13.143 [2024-07-15 23:51:48.149432] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.143 [2024-07-15 23:51:48.149458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.143 qpair failed and we were unable to recover it. 00:25:13.143 [2024-07-15 23:51:48.149578] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.143 [2024-07-15 23:51:48.149605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.143 qpair failed and we were unable to recover it. 00:25:13.143 [2024-07-15 23:51:48.149698] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.143 [2024-07-15 23:51:48.149724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.143 qpair failed and we were unable to recover it. 00:25:13.143 [2024-07-15 23:51:48.149818] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.143 [2024-07-15 23:51:48.149844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.143 qpair failed and we were unable to recover it. 00:25:13.143 [2024-07-15 23:51:48.149977] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.143 [2024-07-15 23:51:48.150003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.143 qpair failed and we were unable to recover it. 
00:25:13.143 [2024-07-15 23:51:48.150094] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.143 [2024-07-15 23:51:48.150120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.143 qpair failed and we were unable to recover it. 00:25:13.143 [2024-07-15 23:51:48.150209] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.143 [2024-07-15 23:51:48.150235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.143 qpair failed and we were unable to recover it. 00:25:13.143 [2024-07-15 23:51:48.150337] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.143 [2024-07-15 23:51:48.150362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.143 qpair failed and we were unable to recover it. 00:25:13.143 [2024-07-15 23:51:48.150461] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.143 [2024-07-15 23:51:48.150486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.143 qpair failed and we were unable to recover it. 00:25:13.143 [2024-07-15 23:51:48.150575] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.143 [2024-07-15 23:51:48.150602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.143 qpair failed and we were unable to recover it. 00:25:13.143 [2024-07-15 23:51:48.150698] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.143 [2024-07-15 23:51:48.150726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.143 qpair failed and we were unable to recover it. 00:25:13.143 [2024-07-15 23:51:48.150824] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.143 [2024-07-15 23:51:48.150850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.143 qpair failed and we were unable to recover it. 00:25:13.143 [2024-07-15 23:51:48.150972] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.143 [2024-07-15 23:51:48.150999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.143 qpair failed and we were unable to recover it. 00:25:13.143 [2024-07-15 23:51:48.151092] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.143 [2024-07-15 23:51:48.151118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.143 qpair failed and we were unable to recover it. 00:25:13.143 [2024-07-15 23:51:48.151213] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.143 [2024-07-15 23:51:48.151238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.143 qpair failed and we were unable to recover it. 
00:25:13.143 [2024-07-15 23:51:48.151343] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.143 [2024-07-15 23:51:48.151369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.143 qpair failed and we were unable to recover it. 00:25:13.143 [2024-07-15 23:51:48.151494] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.143 [2024-07-15 23:51:48.151520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.143 qpair failed and we were unable to recover it. 00:25:13.143 [2024-07-15 23:51:48.151613] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.143 [2024-07-15 23:51:48.151639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.143 qpair failed and we were unable to recover it. 00:25:13.143 [2024-07-15 23:51:48.151733] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.143 [2024-07-15 23:51:48.151759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.143 qpair failed and we were unable to recover it. 00:25:13.143 [2024-07-15 23:51:48.151883] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.143 [2024-07-15 23:51:48.151909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.143 qpair failed and we were unable to recover it. 00:25:13.143 [2024-07-15 23:51:48.152007] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.143 [2024-07-15 23:51:48.152033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.143 qpair failed and we were unable to recover it. 00:25:13.143 [2024-07-15 23:51:48.152148] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.143 [2024-07-15 23:51:48.152189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:13.143 qpair failed and we were unable to recover it. 00:25:13.143 [2024-07-15 23:51:48.152293] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.143 [2024-07-15 23:51:48.152321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:13.143 qpair failed and we were unable to recover it. 00:25:13.143 [2024-07-15 23:51:48.152430] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.152458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 00:25:13.144 [2024-07-15 23:51:48.152565] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.152591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 
00:25:13.144 [2024-07-15 23:51:48.152714] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.152740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 00:25:13.144 [2024-07-15 23:51:48.152876] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.152902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 00:25:13.144 [2024-07-15 23:51:48.153019] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.153047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 00:25:13.144 [2024-07-15 23:51:48.153139] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.153165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 00:25:13.144 [2024-07-15 23:51:48.153252] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.153277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 00:25:13.144 [2024-07-15 23:51:48.153379] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.153405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 00:25:13.144 [2024-07-15 23:51:48.153510] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.153537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 00:25:13.144 [2024-07-15 23:51:48.153634] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.153660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 00:25:13.144 [2024-07-15 23:51:48.153804] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.153829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 00:25:13.144 [2024-07-15 23:51:48.153952] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.153983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 
00:25:13.144 [2024-07-15 23:51:48.154093] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.154119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 00:25:13.144 [2024-07-15 23:51:48.154241] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.154266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 00:25:13.144 [2024-07-15 23:51:48.154393] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.154419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 00:25:13.144 [2024-07-15 23:51:48.154541] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.154566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 00:25:13.144 [2024-07-15 23:51:48.154684] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.154709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 00:25:13.144 [2024-07-15 23:51:48.154808] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.154835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 00:25:13.144 [2024-07-15 23:51:48.154929] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.154967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 00:25:13.144 [2024-07-15 23:51:48.155094] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.155120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 00:25:13.144 [2024-07-15 23:51:48.155209] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.155235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 00:25:13.144 [2024-07-15 23:51:48.155415] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.155480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 
00:25:13.144 [2024-07-15 23:51:48.155645] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.155674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 00:25:13.144 [2024-07-15 23:51:48.155799] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.155825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 00:25:13.144 [2024-07-15 23:51:48.155932] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.155962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 00:25:13.144 [2024-07-15 23:51:48.156091] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.156120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 00:25:13.144 [2024-07-15 23:51:48.156245] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.156271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 00:25:13.144 [2024-07-15 23:51:48.156387] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.156412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 00:25:13.144 [2024-07-15 23:51:48.156521] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.156546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 00:25:13.144 [2024-07-15 23:51:48.156648] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.156673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 00:25:13.144 [2024-07-15 23:51:48.156769] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.156795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 00:25:13.144 [2024-07-15 23:51:48.156898] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.156924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 
00:25:13.144 [2024-07-15 23:51:48.157045] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.157084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 00:25:13.144 [2024-07-15 23:51:48.157196] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.157225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 00:25:13.144 [2024-07-15 23:51:48.157377] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.157403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 00:25:13.144 [2024-07-15 23:51:48.157501] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.157527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 00:25:13.144 [2024-07-15 23:51:48.157620] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.157646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 00:25:13.144 [2024-07-15 23:51:48.157738] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.157763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 00:25:13.144 [2024-07-15 23:51:48.157874] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.157901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 00:25:13.144 [2024-07-15 23:51:48.158027] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.158055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 00:25:13.144 [2024-07-15 23:51:48.158188] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.158214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 00:25:13.144 [2024-07-15 23:51:48.158344] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.158370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 
00:25:13.144 [2024-07-15 23:51:48.158461] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.158487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 00:25:13.144 [2024-07-15 23:51:48.158617] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.158644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 00:25:13.144 [2024-07-15 23:51:48.158763] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.158789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 00:25:13.144 [2024-07-15 23:51:48.158890] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.158915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 00:25:13.144 [2024-07-15 23:51:48.159025] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.159052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 00:25:13.144 [2024-07-15 23:51:48.159158] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.144 [2024-07-15 23:51:48.159184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.144 qpair failed and we were unable to recover it. 00:25:13.144 [2024-07-15 23:51:48.159309] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.159334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 00:25:13.145 [2024-07-15 23:51:48.159440] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.159466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 00:25:13.145 [2024-07-15 23:51:48.159603] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.159642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 00:25:13.145 [2024-07-15 23:51:48.159755] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.159782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 
00:25:13.145 [2024-07-15 23:51:48.159885] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.159911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 00:25:13.145 [2024-07-15 23:51:48.160028] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.160056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 00:25:13.145 [2024-07-15 23:51:48.160161] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.160187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 00:25:13.145 [2024-07-15 23:51:48.160336] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.160362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 00:25:13.145 [2024-07-15 23:51:48.160465] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.160490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 00:25:13.145 [2024-07-15 23:51:48.160589] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.160615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 00:25:13.145 [2024-07-15 23:51:48.160716] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.160742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 00:25:13.145 [2024-07-15 23:51:48.160841] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.160868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 00:25:13.145 [2024-07-15 23:51:48.160965] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.160992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 00:25:13.145 [2024-07-15 23:51:48.161089] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.161117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 
00:25:13.145 [2024-07-15 23:51:48.161214] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.161239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 00:25:13.145 [2024-07-15 23:51:48.161388] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.161417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 00:25:13.145 [2024-07-15 23:51:48.161538] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.161563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 00:25:13.145 [2024-07-15 23:51:48.161669] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.161701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 00:25:13.145 [2024-07-15 23:51:48.161795] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.161821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 00:25:13.145 [2024-07-15 23:51:48.161924] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.161949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 00:25:13.145 [2024-07-15 23:51:48.162064] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.162090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 00:25:13.145 [2024-07-15 23:51:48.162179] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.162204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 00:25:13.145 [2024-07-15 23:51:48.162308] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.162334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 00:25:13.145 [2024-07-15 23:51:48.162468] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.162492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 
00:25:13.145 [2024-07-15 23:51:48.162588] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.162613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 00:25:13.145 [2024-07-15 23:51:48.162703] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.162729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 00:25:13.145 [2024-07-15 23:51:48.162833] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.162868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 00:25:13.145 [2024-07-15 23:51:48.162982] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.163009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 00:25:13.145 [2024-07-15 23:51:48.163108] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.163133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 00:25:13.145 [2024-07-15 23:51:48.163226] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.163252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 00:25:13.145 [2024-07-15 23:51:48.163354] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.163381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 00:25:13.145 [2024-07-15 23:51:48.163484] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.163510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 00:25:13.145 [2024-07-15 23:51:48.163644] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.163670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 00:25:13.145 [2024-07-15 23:51:48.163769] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.163796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 
00:25:13.145 [2024-07-15 23:51:48.163892] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.163919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 00:25:13.145 [2024-07-15 23:51:48.164044] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.164071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 00:25:13.145 [2024-07-15 23:51:48.164168] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.164194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 00:25:13.145 [2024-07-15 23:51:48.164310] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.164337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 00:25:13.145 [2024-07-15 23:51:48.164439] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.164465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 00:25:13.145 [2024-07-15 23:51:48.164557] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.164583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 00:25:13.145 [2024-07-15 23:51:48.164679] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.164705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 00:25:13.145 [2024-07-15 23:51:48.164797] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.164823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 00:25:13.145 [2024-07-15 23:51:48.164954] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.164988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 00:25:13.145 [2024-07-15 23:51:48.165088] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.165114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 
00:25:13.145 [2024-07-15 23:51:48.165212] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.165239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 00:25:13.145 [2024-07-15 23:51:48.165342] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.165368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 00:25:13.145 [2024-07-15 23:51:48.165477] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.165503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 00:25:13.145 [2024-07-15 23:51:48.165589] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.145 [2024-07-15 23:51:48.165616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.145 qpair failed and we were unable to recover it. 00:25:13.146 [2024-07-15 23:51:48.165737] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.165778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 00:25:13.146 [2024-07-15 23:51:48.165890] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.165918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 00:25:13.146 [2024-07-15 23:51:48.166044] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.166073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 00:25:13.146 [2024-07-15 23:51:48.166180] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.166207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 00:25:13.146 [2024-07-15 23:51:48.166323] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.166357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 00:25:13.146 [2024-07-15 23:51:48.166474] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.166509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 
00:25:13.146 [2024-07-15 23:51:48.166653] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.166688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 00:25:13.146 [2024-07-15 23:51:48.166822] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.166848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 00:25:13.146 [2024-07-15 23:51:48.166970] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.167010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 00:25:13.146 [2024-07-15 23:51:48.167123] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.167155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 00:25:13.146 [2024-07-15 23:51:48.167247] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.167273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 00:25:13.146 [2024-07-15 23:51:48.167358] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.167385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 00:25:13.146 [2024-07-15 23:51:48.167513] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.167540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 00:25:13.146 [2024-07-15 23:51:48.167649] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.167675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 00:25:13.146 [2024-07-15 23:51:48.167770] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.167797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 00:25:13.146 [2024-07-15 23:51:48.167888] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.167915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 
00:25:13.146 [2024-07-15 23:51:48.168029] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.168068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 00:25:13.146 [2024-07-15 23:51:48.168168] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.168195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 00:25:13.146 [2024-07-15 23:51:48.168311] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.168340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 00:25:13.146 [2024-07-15 23:51:48.168443] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.168470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 00:25:13.146 [2024-07-15 23:51:48.168565] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.168591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 00:25:13.146 [2024-07-15 23:51:48.168682] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.168709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 00:25:13.146 [2024-07-15 23:51:48.168815] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.168842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 00:25:13.146 [2024-07-15 23:51:48.168947] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.168980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 00:25:13.146 [2024-07-15 23:51:48.169085] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.169112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 00:25:13.146 [2024-07-15 23:51:48.169208] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.169235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 
00:25:13.146 [2024-07-15 23:51:48.169343] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.169369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 00:25:13.146 [2024-07-15 23:51:48.169470] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.169496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 00:25:13.146 [2024-07-15 23:51:48.169587] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.169613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 00:25:13.146 [2024-07-15 23:51:48.169698] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.169724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 00:25:13.146 [2024-07-15 23:51:48.169820] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.169846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 00:25:13.146 [2024-07-15 23:51:48.169970] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.169996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 00:25:13.146 [2024-07-15 23:51:48.170102] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.170131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 00:25:13.146 [2024-07-15 23:51:48.170233] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.170259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 00:25:13.146 [2024-07-15 23:51:48.170355] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.170382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 00:25:13.146 [2024-07-15 23:51:48.170485] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.170512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 
00:25:13.146 [2024-07-15 23:51:48.170614] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.170653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 00:25:13.146 [2024-07-15 23:51:48.170770] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.170797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 00:25:13.146 [2024-07-15 23:51:48.170920] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.170946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 00:25:13.146 [2024-07-15 23:51:48.171053] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.171079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 00:25:13.146 [2024-07-15 23:51:48.171171] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.171197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 00:25:13.146 [2024-07-15 23:51:48.171298] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.171325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 00:25:13.146 [2024-07-15 23:51:48.171418] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.171444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 00:25:13.146 [2024-07-15 23:51:48.171538] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.171565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 00:25:13.146 [2024-07-15 23:51:48.171655] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.171681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 00:25:13.146 [2024-07-15 23:51:48.171799] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.171825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 
00:25:13.146 [2024-07-15 23:51:48.171924] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.171952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:13.146 qpair failed and we were unable to recover it. 00:25:13.146 [2024-07-15 23:51:48.172062] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.146 [2024-07-15 23:51:48.172088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:13.147 qpair failed and we were unable to recover it. 00:25:13.147 [2024-07-15 23:51:48.172205] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.147 [2024-07-15 23:51:48.172230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:13.147 qpair failed and we were unable to recover it. 00:25:13.147 [2024-07-15 23:51:48.172366] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.147 [2024-07-15 23:51:48.172398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:13.147 qpair failed and we were unable to recover it. 00:25:13.147 [2024-07-15 23:51:48.172521] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.147 [2024-07-15 23:51:48.172547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:13.147 qpair failed and we were unable to recover it. 00:25:13.147 [2024-07-15 23:51:48.172670] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.147 [2024-07-15 23:51:48.172696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:13.147 qpair failed and we were unable to recover it. 00:25:13.147 [2024-07-15 23:51:48.172886] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.147 [2024-07-15 23:51:48.172919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:13.147 qpair failed and we were unable to recover it. 00:25:13.147 [2024-07-15 23:51:48.173054] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.147 [2024-07-15 23:51:48.173083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.147 qpair failed and we were unable to recover it. 00:25:13.147 [2024-07-15 23:51:48.173208] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.147 [2024-07-15 23:51:48.173235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.147 qpair failed and we were unable to recover it. 00:25:13.147 [2024-07-15 23:51:48.173338] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.147 [2024-07-15 23:51:48.173366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.147 qpair failed and we were unable to recover it. 
00:25:13.147 [2024-07-15 23:51:48.173493] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.147 [2024-07-15 23:51:48.173518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.147 qpair failed and we were unable to recover it. 00:25:13.147 [2024-07-15 23:51:48.173636] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.147 [2024-07-15 23:51:48.173662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.147 qpair failed and we were unable to recover it. 00:25:13.147 [2024-07-15 23:51:48.173756] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.147 [2024-07-15 23:51:48.173783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.147 qpair failed and we were unable to recover it. 00:25:13.147 [2024-07-15 23:51:48.173898] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.147 [2024-07-15 23:51:48.173924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.147 qpair failed and we were unable to recover it. 00:25:13.147 [2024-07-15 23:51:48.174023] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.147 [2024-07-15 23:51:48.174049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.147 qpair failed and we were unable to recover it. 00:25:13.147 [2024-07-15 23:51:48.174136] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.147 [2024-07-15 23:51:48.174161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.147 qpair failed and we were unable to recover it. 00:25:13.147 [2024-07-15 23:51:48.174254] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.147 [2024-07-15 23:51:48.174281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.147 qpair failed and we were unable to recover it. 00:25:13.147 [2024-07-15 23:51:48.174391] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.147 [2024-07-15 23:51:48.174418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.147 qpair failed and we were unable to recover it. 00:25:13.147 [2024-07-15 23:51:48.174554] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.147 [2024-07-15 23:51:48.174579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.147 qpair failed and we were unable to recover it. 00:25:13.147 [2024-07-15 23:51:48.174699] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.147 [2024-07-15 23:51:48.174725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.147 qpair failed and we were unable to recover it. 
00:25:13.147 [2024-07-15 23:51:48.174820] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.147 [2024-07-15 23:51:48.174847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.147 qpair failed and we were unable to recover it. 00:25:13.147 [2024-07-15 23:51:48.174946] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.147 [2024-07-15 23:51:48.174981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.147 qpair failed and we were unable to recover it. 00:25:13.147 [2024-07-15 23:51:48.175085] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.147 [2024-07-15 23:51:48.175111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.147 qpair failed and we were unable to recover it. 00:25:13.147 [2024-07-15 23:51:48.175202] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.147 [2024-07-15 23:51:48.175228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.147 qpair failed and we were unable to recover it. 00:25:13.147 [2024-07-15 23:51:48.175355] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.147 [2024-07-15 23:51:48.175381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.147 qpair failed and we were unable to recover it. 00:25:13.147 [2024-07-15 23:51:48.175493] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.147 [2024-07-15 23:51:48.175519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.147 qpair failed and we were unable to recover it. 00:25:13.147 [2024-07-15 23:51:48.175613] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.147 [2024-07-15 23:51:48.175639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.147 qpair failed and we were unable to recover it. 00:25:13.147 [2024-07-15 23:51:48.175734] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.147 [2024-07-15 23:51:48.175760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.147 qpair failed and we were unable to recover it. 00:25:13.147 [2024-07-15 23:51:48.175881] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.147 [2024-07-15 23:51:48.175907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.147 qpair failed and we were unable to recover it. 00:25:13.147 [2024-07-15 23:51:48.176018] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.147 [2024-07-15 23:51:48.176044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.147 qpair failed and we were unable to recover it. 
00:25:13.148 [2024-07-15 23:51:48.181046] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.148 [2024-07-15 23:51:48.181085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:13.148 qpair failed and we were unable to recover it.
[... the same triplet repeats for tqpair=0x7a7200 from 23:51:48.181188 through 23:51:48.181755 ...]
00:25:13.148 [2024-07-15 23:51:48.182170] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.148 [2024-07-15 23:51:48.182209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.148 qpair failed and we were unable to recover it.
[... the same triplet repeats for tqpair=0x7feb94000b90 from 23:51:48.182318 through 23:51:48.211332 ...]
00:25:13.148 [2024-07-15 23:51:48.182843] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.148 [2024-07-15 23:51:48.182869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.148 qpair failed and we were unable to recover it. 00:25:13.148 [2024-07-15 23:51:48.183003] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.148 [2024-07-15 23:51:48.183029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.148 qpair failed and we were unable to recover it. 00:25:13.148 [2024-07-15 23:51:48.183127] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.148 [2024-07-15 23:51:48.183152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.148 qpair failed and we were unable to recover it. 00:25:13.148 [2024-07-15 23:51:48.183249] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.148 [2024-07-15 23:51:48.183275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.148 qpair failed and we were unable to recover it. 00:25:13.148 [2024-07-15 23:51:48.183377] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.148 [2024-07-15 23:51:48.183403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.148 qpair failed and we were unable to recover it. 00:25:13.148 [2024-07-15 23:51:48.183558] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.148 [2024-07-15 23:51:48.183583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.148 qpair failed and we were unable to recover it. 00:25:13.148 [2024-07-15 23:51:48.183726] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.148 [2024-07-15 23:51:48.183751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.148 qpair failed and we were unable to recover it. 00:25:13.148 [2024-07-15 23:51:48.183848] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.148 [2024-07-15 23:51:48.183873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.148 qpair failed and we were unable to recover it. 00:25:13.148 [2024-07-15 23:51:48.183986] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.148 [2024-07-15 23:51:48.184014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.148 qpair failed and we were unable to recover it. 00:25:13.148 [2024-07-15 23:51:48.184114] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.148 [2024-07-15 23:51:48.184140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.148 qpair failed and we were unable to recover it. 
00:25:13.148 [2024-07-15 23:51:48.184237] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.148 [2024-07-15 23:51:48.184262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.148 qpair failed and we were unable to recover it. 00:25:13.148 [2024-07-15 23:51:48.184385] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.148 [2024-07-15 23:51:48.184411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.148 qpair failed and we were unable to recover it. 00:25:13.148 [2024-07-15 23:51:48.184513] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.148 [2024-07-15 23:51:48.184540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.148 qpair failed and we were unable to recover it. 00:25:13.148 [2024-07-15 23:51:48.184683] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.148 [2024-07-15 23:51:48.184709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.148 qpair failed and we were unable to recover it. 00:25:13.148 [2024-07-15 23:51:48.184863] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.148 [2024-07-15 23:51:48.184896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.148 qpair failed and we were unable to recover it. 00:25:13.148 [2024-07-15 23:51:48.185036] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.148 [2024-07-15 23:51:48.185064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.148 qpair failed and we were unable to recover it. 00:25:13.148 [2024-07-15 23:51:48.185197] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.148 [2024-07-15 23:51:48.185241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.148 qpair failed and we were unable to recover it. 00:25:13.148 [2024-07-15 23:51:48.185436] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.148 [2024-07-15 23:51:48.185481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.148 qpair failed and we were unable to recover it. 00:25:13.148 [2024-07-15 23:51:48.185667] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.185693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.185858] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.185884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 
00:25:13.149 [2024-07-15 23:51:48.186019] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.186045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.186138] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.186163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.186291] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.186334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.186525] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.186558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.186734] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.186768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.186924] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.186949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.187052] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.187078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.187168] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.187195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.187347] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.187390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.187573] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.187616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 
00:25:13.149 [2024-07-15 23:51:48.187774] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.187823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.187997] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.188023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.188119] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.188144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.188313] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.188355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.188499] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.188548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.188734] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.188776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.188943] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.188985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.189113] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.189138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.189232] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.189258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.189409] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.189452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 
00:25:13.149 [2024-07-15 23:51:48.189595] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.189640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.189851] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.189884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.190044] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.190071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.190173] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.190198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.190335] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.190378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.190572] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.190615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.190860] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.190902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.191067] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.191093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.191210] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.191235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.191413] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.191455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 
00:25:13.149 [2024-07-15 23:51:48.191617] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.191662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.191834] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.191860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.191984] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.192010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.192133] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.192159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.192290] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.192333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.192590] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.192660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.192861] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.192903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.193087] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.193114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.193216] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.193241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.193366] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.193391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 
00:25:13.149 [2024-07-15 23:51:48.193603] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.193629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.193805] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.193868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.194032] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.194058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.194151] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.194177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.194329] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.194372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.194552] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.194594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.194765] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.194808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.194962] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.195011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.195102] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.149 [2024-07-15 23:51:48.195128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.149 qpair failed and we were unable to recover it. 00:25:13.149 [2024-07-15 23:51:48.195324] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.195371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 
00:25:13.150 [2024-07-15 23:51:48.195504] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.195561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 00:25:13.150 [2024-07-15 23:51:48.195762] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.195804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 00:25:13.150 [2024-07-15 23:51:48.196011] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.196038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 00:25:13.150 [2024-07-15 23:51:48.196139] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.196164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 00:25:13.150 [2024-07-15 23:51:48.196294] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.196320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 00:25:13.150 [2024-07-15 23:51:48.196510] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.196562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 00:25:13.150 [2024-07-15 23:51:48.196811] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.196853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 00:25:13.150 [2024-07-15 23:51:48.197038] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.197066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 00:25:13.150 [2024-07-15 23:51:48.197188] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.197214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 00:25:13.150 [2024-07-15 23:51:48.197354] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.197409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 
00:25:13.150 [2024-07-15 23:51:48.197592] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.197634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 00:25:13.150 [2024-07-15 23:51:48.197830] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.197862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 00:25:13.150 [2024-07-15 23:51:48.198009] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.198035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 00:25:13.150 [2024-07-15 23:51:48.198131] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.198155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 00:25:13.150 [2024-07-15 23:51:48.198261] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.198287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 00:25:13.150 [2024-07-15 23:51:48.198448] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.198506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 00:25:13.150 [2024-07-15 23:51:48.198685] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.198710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 00:25:13.150 [2024-07-15 23:51:48.198909] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.198952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 00:25:13.150 [2024-07-15 23:51:48.199099] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.199125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 00:25:13.150 [2024-07-15 23:51:48.199253] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.199296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 
00:25:13.150 [2024-07-15 23:51:48.199477] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.199524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 00:25:13.150 [2024-07-15 23:51:48.199670] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.199727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 00:25:13.150 [2024-07-15 23:51:48.199907] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.199939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 00:25:13.150 [2024-07-15 23:51:48.200078] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.200104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 00:25:13.150 [2024-07-15 23:51:48.200204] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.200229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 00:25:13.150 [2024-07-15 23:51:48.200346] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.200372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 00:25:13.150 [2024-07-15 23:51:48.200493] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.200536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 00:25:13.150 [2024-07-15 23:51:48.200783] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.200826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 00:25:13.150 [2024-07-15 23:51:48.201051] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.201077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 00:25:13.150 [2024-07-15 23:51:48.201178] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.201203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 
00:25:13.150 [2024-07-15 23:51:48.201380] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.201417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 00:25:13.150 [2024-07-15 23:51:48.201526] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.201553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 00:25:13.150 [2024-07-15 23:51:48.201654] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.201679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 00:25:13.150 [2024-07-15 23:51:48.201826] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.201869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 00:25:13.150 [2024-07-15 23:51:48.202051] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.202077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 00:25:13.150 [2024-07-15 23:51:48.202177] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.202202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 00:25:13.150 [2024-07-15 23:51:48.202300] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.202326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 00:25:13.150 [2024-07-15 23:51:48.202453] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.202488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 00:25:13.150 [2024-07-15 23:51:48.202589] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.202614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 00:25:13.150 [2024-07-15 23:51:48.202754] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.202796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 
00:25:13.150 [2024-07-15 23:51:48.202942] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.202992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 00:25:13.150 [2024-07-15 23:51:48.203167] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.203210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 00:25:13.150 [2024-07-15 23:51:48.203375] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.203420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 00:25:13.150 [2024-07-15 23:51:48.203643] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.203676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 00:25:13.150 [2024-07-15 23:51:48.203886] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.150 [2024-07-15 23:51:48.203928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.150 qpair failed and we were unable to recover it. 00:25:13.150 [2024-07-15 23:51:48.204130] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.151 [2024-07-15 23:51:48.204163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.151 qpair failed and we were unable to recover it. 00:25:13.151 [2024-07-15 23:51:48.204280] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.151 [2024-07-15 23:51:48.204315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.151 qpair failed and we were unable to recover it. 00:25:13.151 [2024-07-15 23:51:48.204486] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.151 [2024-07-15 23:51:48.204540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.151 qpair failed and we were unable to recover it. 00:25:13.151 [2024-07-15 23:51:48.204787] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.151 [2024-07-15 23:51:48.204830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.151 qpair failed and we were unable to recover it. 00:25:13.151 [2024-07-15 23:51:48.204976] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.151 [2024-07-15 23:51:48.205027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.151 qpair failed and we were unable to recover it. 
00:25:13.429 [2024-07-15 23:51:48.225986] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.429 [2024-07-15 23:51:48.226038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.429 qpair failed and we were unable to recover it.
00:25:13.429 [... identical messages for tqpair=0x7feb8c000b90 repeated through 2024-07-15 23:51:48.227243 ...]
00:25:13.429 [2024-07-15 23:51:48.227413] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.429 [2024-07-15 23:51:48.227462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.429 qpair failed and we were unable to recover it.
00:25:13.432 [2024-07-15 23:51:48.249105] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.432 [2024-07-15 23:51:48.249159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.432 qpair failed and we were unable to recover it. 00:25:13.432 [2024-07-15 23:51:48.249389] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.432 [2024-07-15 23:51:48.249442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.432 qpair failed and we were unable to recover it. 00:25:13.432 [2024-07-15 23:51:48.249635] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.432 [2024-07-15 23:51:48.249686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.432 qpair failed and we were unable to recover it. 00:25:13.432 [2024-07-15 23:51:48.249902] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.432 [2024-07-15 23:51:48.249967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.432 qpair failed and we were unable to recover it. 00:25:13.432 [2024-07-15 23:51:48.250196] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.432 [2024-07-15 23:51:48.250248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.432 qpair failed and we were unable to recover it. 00:25:13.432 [2024-07-15 23:51:48.250451] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.432 [2024-07-15 23:51:48.250502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.432 qpair failed and we were unable to recover it. 00:25:13.432 [2024-07-15 23:51:48.250737] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.432 [2024-07-15 23:51:48.250787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.432 qpair failed and we were unable to recover it. 00:25:13.432 [2024-07-15 23:51:48.251045] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.432 [2024-07-15 23:51:48.251079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.432 qpair failed and we were unable to recover it. 00:25:13.432 [2024-07-15 23:51:48.251239] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.432 [2024-07-15 23:51:48.251274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.432 qpair failed and we were unable to recover it. 00:25:13.432 [2024-07-15 23:51:48.251476] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.432 [2024-07-15 23:51:48.251527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.432 qpair failed and we were unable to recover it. 
00:25:13.432 [2024-07-15 23:51:48.251785] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.432 [2024-07-15 23:51:48.251836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.432 qpair failed and we were unable to recover it. 00:25:13.432 [2024-07-15 23:51:48.252029] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.432 [2024-07-15 23:51:48.252082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.432 qpair failed and we were unable to recover it. 00:25:13.432 [2024-07-15 23:51:48.252308] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.432 [2024-07-15 23:51:48.252360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.432 qpair failed and we were unable to recover it. 00:25:13.432 [2024-07-15 23:51:48.252578] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.432 [2024-07-15 23:51:48.252636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.432 qpair failed and we were unable to recover it. 00:25:13.432 [2024-07-15 23:51:48.252938] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.432 [2024-07-15 23:51:48.253027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.432 qpair failed and we were unable to recover it. 00:25:13.432 [2024-07-15 23:51:48.253227] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.432 [2024-07-15 23:51:48.253279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.432 qpair failed and we were unable to recover it. 00:25:13.432 [2024-07-15 23:51:48.253498] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.432 [2024-07-15 23:51:48.253549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.432 qpair failed and we were unable to recover it. 00:25:13.432 [2024-07-15 23:51:48.253779] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.432 [2024-07-15 23:51:48.253830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.432 qpair failed and we were unable to recover it. 00:25:13.432 [2024-07-15 23:51:48.254059] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.432 [2024-07-15 23:51:48.254114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.432 qpair failed and we were unable to recover it. 00:25:13.432 [2024-07-15 23:51:48.254382] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.432 [2024-07-15 23:51:48.254431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.432 qpair failed and we were unable to recover it. 
00:25:13.432 [2024-07-15 23:51:48.254596] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.432 [2024-07-15 23:51:48.254645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.432 qpair failed and we were unable to recover it. 00:25:13.432 [2024-07-15 23:51:48.254887] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.432 [2024-07-15 23:51:48.254938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.432 qpair failed and we were unable to recover it. 00:25:13.432 [2024-07-15 23:51:48.255216] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.432 [2024-07-15 23:51:48.255273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.432 qpair failed and we were unable to recover it. 00:25:13.432 [2024-07-15 23:51:48.255526] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.432 [2024-07-15 23:51:48.255577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.432 qpair failed and we were unable to recover it. 00:25:13.432 [2024-07-15 23:51:48.255830] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.432 [2024-07-15 23:51:48.255881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.432 qpair failed and we were unable to recover it. 00:25:13.432 [2024-07-15 23:51:48.256135] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.432 [2024-07-15 23:51:48.256188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.432 qpair failed and we were unable to recover it. 00:25:13.432 [2024-07-15 23:51:48.256369] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.432 [2024-07-15 23:51:48.256420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.432 qpair failed and we were unable to recover it. 00:25:13.433 [2024-07-15 23:51:48.256687] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.433 [2024-07-15 23:51:48.256743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.433 qpair failed and we were unable to recover it. 00:25:13.433 [2024-07-15 23:51:48.257009] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.433 [2024-07-15 23:51:48.257084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.433 qpair failed and we were unable to recover it. 00:25:13.433 [2024-07-15 23:51:48.257390] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.433 [2024-07-15 23:51:48.257467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.433 qpair failed and we were unable to recover it. 
00:25:13.433 [2024-07-15 23:51:48.257708] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.433 [2024-07-15 23:51:48.257759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.433 qpair failed and we were unable to recover it. 00:25:13.433 [2024-07-15 23:51:48.257986] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.433 [2024-07-15 23:51:48.258039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.433 qpair failed and we were unable to recover it. 00:25:13.433 [2024-07-15 23:51:48.258281] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.433 [2024-07-15 23:51:48.258333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.433 qpair failed and we were unable to recover it. 00:25:13.433 [2024-07-15 23:51:48.258538] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.433 [2024-07-15 23:51:48.258586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.433 qpair failed and we were unable to recover it. 00:25:13.433 [2024-07-15 23:51:48.258812] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.433 [2024-07-15 23:51:48.258864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.433 qpair failed and we were unable to recover it. 00:25:13.433 [2024-07-15 23:51:48.259158] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.433 [2024-07-15 23:51:48.259208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.433 qpair failed and we were unable to recover it. 00:25:13.433 [2024-07-15 23:51:48.259419] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.433 [2024-07-15 23:51:48.259485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.433 qpair failed and we were unable to recover it. 00:25:13.433 [2024-07-15 23:51:48.259738] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.433 [2024-07-15 23:51:48.259790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.433 qpair failed and we were unable to recover it. 00:25:13.433 [2024-07-15 23:51:48.260007] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.433 [2024-07-15 23:51:48.260059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.433 qpair failed and we were unable to recover it. 00:25:13.433 [2024-07-15 23:51:48.260310] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.433 [2024-07-15 23:51:48.260362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.433 qpair failed and we were unable to recover it. 
00:25:13.433 [2024-07-15 23:51:48.260612] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.433 [2024-07-15 23:51:48.260663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.433 qpair failed and we were unable to recover it. 00:25:13.433 [2024-07-15 23:51:48.260868] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.433 [2024-07-15 23:51:48.260920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.433 qpair failed and we were unable to recover it. 00:25:13.433 [2024-07-15 23:51:48.261127] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.433 [2024-07-15 23:51:48.261179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.433 qpair failed and we were unable to recover it. 00:25:13.433 [2024-07-15 23:51:48.261404] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.433 [2024-07-15 23:51:48.261455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.433 qpair failed and we were unable to recover it. 00:25:13.433 [2024-07-15 23:51:48.261679] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.433 [2024-07-15 23:51:48.261733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.433 qpair failed and we were unable to recover it. 00:25:13.433 [2024-07-15 23:51:48.261953] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.433 [2024-07-15 23:51:48.262019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.433 qpair failed and we were unable to recover it. 00:25:13.433 [2024-07-15 23:51:48.262265] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.433 [2024-07-15 23:51:48.262317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.433 qpair failed and we were unable to recover it. 00:25:13.433 [2024-07-15 23:51:48.262529] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.433 [2024-07-15 23:51:48.262580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.433 qpair failed and we were unable to recover it. 00:25:13.433 [2024-07-15 23:51:48.262798] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.433 [2024-07-15 23:51:48.262849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.433 qpair failed and we were unable to recover it. 00:25:13.433 [2024-07-15 23:51:48.263073] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.433 [2024-07-15 23:51:48.263130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.433 qpair failed and we were unable to recover it. 
00:25:13.433 [2024-07-15 23:51:48.263350] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.433 [2024-07-15 23:51:48.263384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.433 qpair failed and we were unable to recover it. 00:25:13.433 [2024-07-15 23:51:48.263529] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.433 [2024-07-15 23:51:48.263584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.433 qpair failed and we were unable to recover it. 00:25:13.433 [2024-07-15 23:51:48.263864] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.433 [2024-07-15 23:51:48.263919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.433 qpair failed and we were unable to recover it. 00:25:13.433 [2024-07-15 23:51:48.264182] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.433 [2024-07-15 23:51:48.264238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.433 qpair failed and we were unable to recover it. 00:25:13.433 [2024-07-15 23:51:48.264451] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.433 [2024-07-15 23:51:48.264508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.433 qpair failed and we were unable to recover it. 00:25:13.433 [2024-07-15 23:51:48.264731] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.434 [2024-07-15 23:51:48.264785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.434 qpair failed and we were unable to recover it. 00:25:13.434 [2024-07-15 23:51:48.264995] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.434 [2024-07-15 23:51:48.265053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.434 qpair failed and we were unable to recover it. 00:25:13.434 [2024-07-15 23:51:48.265282] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.434 [2024-07-15 23:51:48.265317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.434 qpair failed and we were unable to recover it. 00:25:13.434 [2024-07-15 23:51:48.265497] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.434 [2024-07-15 23:51:48.265559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.434 qpair failed and we were unable to recover it. 00:25:13.434 [2024-07-15 23:51:48.265828] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.434 [2024-07-15 23:51:48.265883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.434 qpair failed and we were unable to recover it. 
00:25:13.434 [2024-07-15 23:51:48.266134] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.434 [2024-07-15 23:51:48.266190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.434 qpair failed and we were unable to recover it. 00:25:13.434 [2024-07-15 23:51:48.266457] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.434 [2024-07-15 23:51:48.266512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.434 qpair failed and we were unable to recover it. 00:25:13.434 [2024-07-15 23:51:48.266794] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.434 [2024-07-15 23:51:48.266849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.434 qpair failed and we were unable to recover it. 00:25:13.434 [2024-07-15 23:51:48.267068] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.434 [2024-07-15 23:51:48.267127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.434 qpair failed and we were unable to recover it. 00:25:13.434 [2024-07-15 23:51:48.267379] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.434 [2024-07-15 23:51:48.267433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.434 qpair failed and we were unable to recover it. 00:25:13.434 [2024-07-15 23:51:48.267665] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.434 [2024-07-15 23:51:48.267720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.434 qpair failed and we were unable to recover it. 00:25:13.434 [2024-07-15 23:51:48.267899] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.434 [2024-07-15 23:51:48.267968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.434 qpair failed and we were unable to recover it. 00:25:13.434 [2024-07-15 23:51:48.268241] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.434 [2024-07-15 23:51:48.268305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.434 qpair failed and we were unable to recover it. 00:25:13.434 [2024-07-15 23:51:48.268573] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.434 [2024-07-15 23:51:48.268628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.434 qpair failed and we were unable to recover it. 00:25:13.434 [2024-07-15 23:51:48.268898] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.434 [2024-07-15 23:51:48.268930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.434 qpair failed and we were unable to recover it. 
00:25:13.434 [2024-07-15 23:51:48.269070] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.434 [2024-07-15 23:51:48.269104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.434 qpair failed and we were unable to recover it. 00:25:13.434 [2024-07-15 23:51:48.269253] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.434 [2024-07-15 23:51:48.269287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.434 qpair failed and we were unable to recover it. 00:25:13.434 [2024-07-15 23:51:48.269426] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.434 [2024-07-15 23:51:48.269490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.434 qpair failed and we were unable to recover it. 00:25:13.434 [2024-07-15 23:51:48.269687] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.434 [2024-07-15 23:51:48.269741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.434 qpair failed and we were unable to recover it. 00:25:13.434 [2024-07-15 23:51:48.270051] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.434 [2024-07-15 23:51:48.270106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.434 qpair failed and we were unable to recover it. 00:25:13.434 [2024-07-15 23:51:48.270314] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.434 [2024-07-15 23:51:48.270369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.434 qpair failed and we were unable to recover it. 00:25:13.434 [2024-07-15 23:51:48.270639] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.434 [2024-07-15 23:51:48.270694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.434 qpair failed and we were unable to recover it. 00:25:13.434 [2024-07-15 23:51:48.270925] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.434 [2024-07-15 23:51:48.270995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.434 qpair failed and we were unable to recover it. 00:25:13.434 [2024-07-15 23:51:48.271208] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.434 [2024-07-15 23:51:48.271262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.434 qpair failed and we were unable to recover it. 00:25:13.434 [2024-07-15 23:51:48.271465] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.434 [2024-07-15 23:51:48.271519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.434 qpair failed and we were unable to recover it. 
00:25:13.434 [2024-07-15 23:51:48.271766] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.434 [2024-07-15 23:51:48.271799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.434 qpair failed and we were unable to recover it. 00:25:13.434 [2024-07-15 23:51:48.271979] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.434 [2024-07-15 23:51:48.272013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.434 qpair failed and we were unable to recover it. 00:25:13.434 [2024-07-15 23:51:48.272140] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.434 [2024-07-15 23:51:48.272202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.434 qpair failed and we were unable to recover it. 00:25:13.434 [2024-07-15 23:51:48.272405] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.434 [2024-07-15 23:51:48.272459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.434 qpair failed and we were unable to recover it. 00:25:13.434 [2024-07-15 23:51:48.272737] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.434 [2024-07-15 23:51:48.272786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.434 qpair failed and we were unable to recover it. 00:25:13.434 [2024-07-15 23:51:48.273049] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.434 [2024-07-15 23:51:48.273106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.434 qpair failed and we were unable to recover it. 00:25:13.434 [2024-07-15 23:51:48.273351] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.434 [2024-07-15 23:51:48.273405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.434 qpair failed and we were unable to recover it. 00:25:13.434 [2024-07-15 23:51:48.273673] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.434 [2024-07-15 23:51:48.273729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.434 qpair failed and we were unable to recover it. 00:25:13.434 [2024-07-15 23:51:48.274003] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.434 [2024-07-15 23:51:48.274061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.434 qpair failed and we were unable to recover it. 00:25:13.434 [2024-07-15 23:51:48.274341] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.434 [2024-07-15 23:51:48.274395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.434 qpair failed and we were unable to recover it. 
00:25:13.434 [2024-07-15 23:51:48.274617] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.434 [2024-07-15 23:51:48.274651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.434 qpair failed and we were unable to recover it. 00:25:13.434 [2024-07-15 23:51:48.274773] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.434 [2024-07-15 23:51:48.274806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.434 qpair failed and we were unable to recover it. 00:25:13.434 [2024-07-15 23:51:48.274979] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.434 [2024-07-15 23:51:48.275036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.434 qpair failed and we were unable to recover it. 00:25:13.434 [2024-07-15 23:51:48.275251] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.434 [2024-07-15 23:51:48.275284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.434 qpair failed and we were unable to recover it. 00:25:13.434 [2024-07-15 23:51:48.275439] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.434 [2024-07-15 23:51:48.275473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.434 qpair failed and we were unable to recover it. 00:25:13.434 [2024-07-15 23:51:48.275637] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.434 [2024-07-15 23:51:48.275708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.434 qpair failed and we were unable to recover it. 00:25:13.434 [2024-07-15 23:51:48.275980] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.435 [2024-07-15 23:51:48.276038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.435 qpair failed and we were unable to recover it. 00:25:13.435 [2024-07-15 23:51:48.276272] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.435 [2024-07-15 23:51:48.276327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.435 qpair failed and we were unable to recover it. 00:25:13.435 [2024-07-15 23:51:48.276582] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.435 [2024-07-15 23:51:48.276630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.435 qpair failed and we were unable to recover it. 00:25:13.435 [2024-07-15 23:51:48.276842] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.435 [2024-07-15 23:51:48.276913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.435 qpair failed and we were unable to recover it. 
00:25:13.435 [2024-07-15 23:51:48.277839] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.435 [2024-07-15 23:51:48.277900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.435 qpair failed and we were unable to recover it. 00:25:13.435 [2024-07-15 23:51:48.278197] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.435 [2024-07-15 23:51:48.278233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.435 qpair failed and we were unable to recover it. 00:25:13.435 [2024-07-15 23:51:48.278363] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.435 [2024-07-15 23:51:48.278395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.435 qpair failed and we were unable to recover it. 00:25:13.435 [2024-07-15 23:51:48.278545] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.435 [2024-07-15 23:51:48.278579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.435 qpair failed and we were unable to recover it. 00:25:13.435 [2024-07-15 23:51:48.278702] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.435 [2024-07-15 23:51:48.278736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.435 qpair failed and we were unable to recover it. 00:25:13.435 [2024-07-15 23:51:48.278891] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.435 [2024-07-15 23:51:48.278924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.435 qpair failed and we were unable to recover it. 00:25:13.435 [2024-07-15 23:51:48.279089] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.435 [2024-07-15 23:51:48.279123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.435 qpair failed and we were unable to recover it. 00:25:13.435 [2024-07-15 23:51:48.279255] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.435 [2024-07-15 23:51:48.279294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.435 qpair failed and we were unable to recover it. 00:25:13.435 [2024-07-15 23:51:48.279453] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.435 [2024-07-15 23:51:48.279525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.435 qpair failed and we were unable to recover it. 00:25:13.435 [2024-07-15 23:51:48.279754] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.435 [2024-07-15 23:51:48.279809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.435 qpair failed and we were unable to recover it. 
00:25:13.435 [2024-07-15 23:51:48.280077] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.435 [2024-07-15 23:51:48.280134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.435 qpair failed and we were unable to recover it. 00:25:13.435 [2024-07-15 23:51:48.280342] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.435 [2024-07-15 23:51:48.280398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.435 qpair failed and we were unable to recover it. 00:25:13.435 [2024-07-15 23:51:48.280599] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.435 [2024-07-15 23:51:48.280661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.435 qpair failed and we were unable to recover it. 00:25:13.435 [2024-07-15 23:51:48.280896] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.435 [2024-07-15 23:51:48.280950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.435 qpair failed and we were unable to recover it. 00:25:13.435 [2024-07-15 23:51:48.281182] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.435 [2024-07-15 23:51:48.281237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.435 qpair failed and we were unable to recover it. 00:25:13.435 [2024-07-15 23:51:48.281501] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.435 [2024-07-15 23:51:48.281535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.435 qpair failed and we were unable to recover it. 00:25:13.435 [2024-07-15 23:51:48.281715] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.435 [2024-07-15 23:51:48.281749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.435 qpair failed and we were unable to recover it. 00:25:13.435 [2024-07-15 23:51:48.281977] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.435 [2024-07-15 23:51:48.282033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.435 qpair failed and we were unable to recover it. 00:25:13.435 [2024-07-15 23:51:48.282244] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.435 [2024-07-15 23:51:48.282298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.435 qpair failed and we were unable to recover it. 00:25:13.435 [2024-07-15 23:51:48.282527] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.435 [2024-07-15 23:51:48.282581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.435 qpair failed and we were unable to recover it. 
00:25:13.435 [2024-07-15 23:51:48.282777] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.435 [2024-07-15 23:51:48.282832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.435 qpair failed and we were unable to recover it. 00:25:13.435 [2024-07-15 23:51:48.283079] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.435 [2024-07-15 23:51:48.283136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.435 qpair failed and we were unable to recover it. 00:25:13.435 [2024-07-15 23:51:48.283315] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.435 [2024-07-15 23:51:48.283371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.435 qpair failed and we were unable to recover it. 00:25:13.435 [2024-07-15 23:51:48.283600] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.435 [2024-07-15 23:51:48.283654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.435 qpair failed and we were unable to recover it. 00:25:13.435 [2024-07-15 23:51:48.283856] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.435 [2024-07-15 23:51:48.283911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.435 qpair failed and we were unable to recover it. 00:25:13.435 [2024-07-15 23:51:48.284205] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.435 [2024-07-15 23:51:48.284262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.435 qpair failed and we were unable to recover it. 00:25:13.435 [2024-07-15 23:51:48.284490] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.435 [2024-07-15 23:51:48.284545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.435 qpair failed and we were unable to recover it. 00:25:13.435 [2024-07-15 23:51:48.284776] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.435 [2024-07-15 23:51:48.284832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.435 qpair failed and we were unable to recover it. 00:25:13.435 [2024-07-15 23:51:48.285039] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.435 [2024-07-15 23:51:48.285095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.435 qpair failed and we were unable to recover it. 00:25:13.435 [2024-07-15 23:51:48.285318] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.435 [2024-07-15 23:51:48.285373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.435 qpair failed and we were unable to recover it. 
00:25:13.435 [2024-07-15 23:51:48.285611] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.435 [2024-07-15 23:51:48.285659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.435 qpair failed and we were unable to recover it. 00:25:13.435 [2024-07-15 23:51:48.285868] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.435 [2024-07-15 23:51:48.285937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.435 qpair failed and we were unable to recover it. 00:25:13.435 [2024-07-15 23:51:48.286191] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.435 [2024-07-15 23:51:48.286246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.435 qpair failed and we were unable to recover it. 00:25:13.435 [2024-07-15 23:51:48.286439] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.435 [2024-07-15 23:51:48.286515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.435 qpair failed and we were unable to recover it. 00:25:13.435 [2024-07-15 23:51:48.286791] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.435 [2024-07-15 23:51:48.286840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.435 qpair failed and we were unable to recover it. 00:25:13.435 [2024-07-15 23:51:48.287056] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.435 [2024-07-15 23:51:48.287127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.435 qpair failed and we were unable to recover it. 00:25:13.435 [2024-07-15 23:51:48.287369] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.435 [2024-07-15 23:51:48.287424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.435 qpair failed and we were unable to recover it. 00:25:13.436 [2024-07-15 23:51:48.287693] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.436 [2024-07-15 23:51:48.287748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.436 qpair failed and we were unable to recover it. 00:25:13.436 [2024-07-15 23:51:48.287985] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.436 [2024-07-15 23:51:48.288040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.436 qpair failed and we were unable to recover it. 00:25:13.436 [2024-07-15 23:51:48.288311] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.436 [2024-07-15 23:51:48.288370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.436 qpair failed and we were unable to recover it. 
00:25:13.436 [2024-07-15 23:51:48.288626] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.436 [2024-07-15 23:51:48.288684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.436 qpair failed and we were unable to recover it.
[... the same three-line error repeats for each successive reconnect attempt from 23:51:48.288626 through 23:51:48.357359, always with tqpair=0x7feb94000b90, addr=10.0.0.2, port=4420, errno = 111; only the timestamps advance ...]
00:25:13.441 [2024-07-15 23:51:48.357283] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.441 [2024-07-15 23:51:48.357359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.441 qpair failed and we were unable to recover it.
00:25:13.441 [2024-07-15 23:51:48.357615] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.441 [2024-07-15 23:51:48.357691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.441 qpair failed and we were unable to recover it. 00:25:13.441 [2024-07-15 23:51:48.357951] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.441 [2024-07-15 23:51:48.358026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.441 qpair failed and we were unable to recover it. 00:25:13.441 [2024-07-15 23:51:48.358294] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.441 [2024-07-15 23:51:48.358372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.441 qpair failed and we were unable to recover it. 00:25:13.441 [2024-07-15 23:51:48.358648] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.441 [2024-07-15 23:51:48.358726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.441 qpair failed and we were unable to recover it. 00:25:13.441 [2024-07-15 23:51:48.359036] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.441 [2024-07-15 23:51:48.359114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.441 qpair failed and we were unable to recover it. 00:25:13.441 [2024-07-15 23:51:48.359394] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.441 [2024-07-15 23:51:48.359471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.441 qpair failed and we were unable to recover it. 00:25:13.441 [2024-07-15 23:51:48.359781] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.441 [2024-07-15 23:51:48.359857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.441 qpair failed and we were unable to recover it. 00:25:13.441 [2024-07-15 23:51:48.360112] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.441 [2024-07-15 23:51:48.360172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.441 qpair failed and we were unable to recover it. 00:25:13.441 [2024-07-15 23:51:48.360439] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.441 [2024-07-15 23:51:48.360516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.441 qpair failed and we were unable to recover it. 00:25:13.441 [2024-07-15 23:51:48.360792] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.441 [2024-07-15 23:51:48.360869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.441 qpair failed and we were unable to recover it. 
00:25:13.441 [2024-07-15 23:51:48.361157] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.441 [2024-07-15 23:51:48.361236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.441 qpair failed and we were unable to recover it. 00:25:13.441 [2024-07-15 23:51:48.361482] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.441 [2024-07-15 23:51:48.361558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.441 qpair failed and we were unable to recover it. 00:25:13.441 [2024-07-15 23:51:48.361847] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.441 [2024-07-15 23:51:48.361906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.441 qpair failed and we were unable to recover it. 00:25:13.441 [2024-07-15 23:51:48.362210] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.441 [2024-07-15 23:51:48.362287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.442 qpair failed and we were unable to recover it. 00:25:13.442 [2024-07-15 23:51:48.362556] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.442 [2024-07-15 23:51:48.362634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.442 qpair failed and we were unable to recover it. 00:25:13.442 [2024-07-15 23:51:48.362917] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.442 [2024-07-15 23:51:48.362992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.442 qpair failed and we were unable to recover it. 00:25:13.442 [2024-07-15 23:51:48.363315] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.442 [2024-07-15 23:51:48.363396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.442 qpair failed and we were unable to recover it. 00:25:13.442 [2024-07-15 23:51:48.363710] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.442 [2024-07-15 23:51:48.363786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.442 qpair failed and we were unable to recover it. 00:25:13.442 [2024-07-15 23:51:48.364044] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.442 [2024-07-15 23:51:48.364105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.442 qpair failed and we were unable to recover it. 00:25:13.442 [2024-07-15 23:51:48.364391] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.442 [2024-07-15 23:51:48.364467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.442 qpair failed and we were unable to recover it. 
00:25:13.442 [2024-07-15 23:51:48.364766] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.442 [2024-07-15 23:51:48.364842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.442 qpair failed and we were unable to recover it. 00:25:13.442 [2024-07-15 23:51:48.365137] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.442 [2024-07-15 23:51:48.365199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.442 qpair failed and we were unable to recover it. 00:25:13.442 [2024-07-15 23:51:48.365507] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.442 [2024-07-15 23:51:48.365584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.442 qpair failed and we were unable to recover it. 00:25:13.442 [2024-07-15 23:51:48.365834] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.442 [2024-07-15 23:51:48.365892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.442 qpair failed and we were unable to recover it. 00:25:13.442 [2024-07-15 23:51:48.366148] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.442 [2024-07-15 23:51:48.366227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.442 qpair failed and we were unable to recover it. 00:25:13.442 [2024-07-15 23:51:48.366504] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.442 [2024-07-15 23:51:48.366581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.442 qpair failed and we were unable to recover it. 00:25:13.442 [2024-07-15 23:51:48.366866] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.442 [2024-07-15 23:51:48.366925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.442 qpair failed and we were unable to recover it. 00:25:13.442 [2024-07-15 23:51:48.367247] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.442 [2024-07-15 23:51:48.367337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.442 qpair failed and we were unable to recover it. 00:25:13.442 [2024-07-15 23:51:48.367597] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.442 [2024-07-15 23:51:48.367672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.442 qpair failed and we were unable to recover it. 00:25:13.442 [2024-07-15 23:51:48.367984] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.442 [2024-07-15 23:51:48.368040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.442 qpair failed and we were unable to recover it. 
00:25:13.442 [2024-07-15 23:51:48.368308] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.442 [2024-07-15 23:51:48.368384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.442 qpair failed and we were unable to recover it. 00:25:13.442 [2024-07-15 23:51:48.368642] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.442 [2024-07-15 23:51:48.368681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.442 qpair failed and we were unable to recover it. 00:25:13.442 [2024-07-15 23:51:48.368828] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.442 [2024-07-15 23:51:48.368870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.442 qpair failed and we were unable to recover it. 00:25:13.442 [2024-07-15 23:51:48.369056] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.442 [2024-07-15 23:51:48.369098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.442 qpair failed and we were unable to recover it. 00:25:13.442 [2024-07-15 23:51:48.369253] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.442 [2024-07-15 23:51:48.369294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.442 qpair failed and we were unable to recover it. 00:25:13.442 [2024-07-15 23:51:48.369475] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.442 [2024-07-15 23:51:48.369516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.442 qpair failed and we were unable to recover it. 00:25:13.442 [2024-07-15 23:51:48.369698] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.442 [2024-07-15 23:51:48.369738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.442 qpair failed and we were unable to recover it. 00:25:13.442 [2024-07-15 23:51:48.369920] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.442 [2024-07-15 23:51:48.370013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.442 qpair failed and we were unable to recover it. 00:25:13.442 [2024-07-15 23:51:48.370334] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.442 [2024-07-15 23:51:48.370414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.442 qpair failed and we were unable to recover it. 00:25:13.442 [2024-07-15 23:51:48.370660] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.442 [2024-07-15 23:51:48.370737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.442 qpair failed and we were unable to recover it. 
00:25:13.442 [2024-07-15 23:51:48.370983] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.442 [2024-07-15 23:51:48.371025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.442 qpair failed and we were unable to recover it. 00:25:13.442 [2024-07-15 23:51:48.371190] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.442 [2024-07-15 23:51:48.371261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.442 qpair failed and we were unable to recover it. 00:25:13.442 [2024-07-15 23:51:48.371572] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.442 [2024-07-15 23:51:48.371648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.442 qpair failed and we were unable to recover it. 00:25:13.442 [2024-07-15 23:51:48.371901] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.443 [2024-07-15 23:51:48.371987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.443 qpair failed and we were unable to recover it. 00:25:13.443 [2024-07-15 23:51:48.372315] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.443 [2024-07-15 23:51:48.372400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.443 qpair failed and we were unable to recover it. 00:25:13.443 [2024-07-15 23:51:48.372728] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.443 [2024-07-15 23:51:48.372805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.443 qpair failed and we were unable to recover it. 00:25:13.443 [2024-07-15 23:51:48.373085] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.443 [2024-07-15 23:51:48.373164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.443 qpair failed and we were unable to recover it. 00:25:13.443 [2024-07-15 23:51:48.373469] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.443 [2024-07-15 23:51:48.373545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.443 qpair failed and we were unable to recover it. 00:25:13.443 [2024-07-15 23:51:48.373837] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.443 [2024-07-15 23:51:48.373896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.443 qpair failed and we were unable to recover it. 00:25:13.443 [2024-07-15 23:51:48.374149] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.443 [2024-07-15 23:51:48.374227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.443 qpair failed and we were unable to recover it. 
00:25:13.443 [2024-07-15 23:51:48.374497] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.443 [2024-07-15 23:51:48.374573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.443 qpair failed and we were unable to recover it. 00:25:13.443 [2024-07-15 23:51:48.374865] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.443 [2024-07-15 23:51:48.374925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.443 qpair failed and we were unable to recover it. 00:25:13.443 [2024-07-15 23:51:48.375216] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.443 [2024-07-15 23:51:48.375306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.443 qpair failed and we were unable to recover it. 00:25:13.443 [2024-07-15 23:51:48.375627] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.443 [2024-07-15 23:51:48.375682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.443 qpair failed and we were unable to recover it. 00:25:13.443 [2024-07-15 23:51:48.375923] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.443 [2024-07-15 23:51:48.376005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.443 qpair failed and we were unable to recover it. 00:25:13.443 [2024-07-15 23:51:48.376264] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.443 [2024-07-15 23:51:48.376341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.443 qpair failed and we were unable to recover it. 00:25:13.443 [2024-07-15 23:51:48.376612] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.443 [2024-07-15 23:51:48.376688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.443 qpair failed and we were unable to recover it. 00:25:13.443 [2024-07-15 23:51:48.376952] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.443 [2024-07-15 23:51:48.377030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.443 qpair failed and we were unable to recover it. 00:25:13.443 [2024-07-15 23:51:48.377321] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.443 [2024-07-15 23:51:48.377398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.443 qpair failed and we were unable to recover it. 00:25:13.443 [2024-07-15 23:51:48.377707] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.443 [2024-07-15 23:51:48.377782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.443 qpair failed and we were unable to recover it. 
00:25:13.443 [2024-07-15 23:51:48.378041] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.443 [2024-07-15 23:51:48.378103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.443 qpair failed and we were unable to recover it. 00:25:13.443 [2024-07-15 23:51:48.378388] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.443 [2024-07-15 23:51:48.378465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.443 qpair failed and we were unable to recover it. 00:25:13.443 [2024-07-15 23:51:48.378747] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.443 [2024-07-15 23:51:48.378824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.443 qpair failed and we were unable to recover it. 00:25:13.443 [2024-07-15 23:51:48.379095] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.443 [2024-07-15 23:51:48.379172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.443 qpair failed and we were unable to recover it. 00:25:13.443 [2024-07-15 23:51:48.379485] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.443 [2024-07-15 23:51:48.379525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.443 qpair failed and we were unable to recover it. 00:25:13.443 [2024-07-15 23:51:48.379699] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.443 [2024-07-15 23:51:48.379739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.443 qpair failed and we were unable to recover it. 00:25:13.443 [2024-07-15 23:51:48.379998] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.443 [2024-07-15 23:51:48.380060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.443 qpair failed and we were unable to recover it. 00:25:13.443 [2024-07-15 23:51:48.380337] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.443 [2024-07-15 23:51:48.380414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.443 qpair failed and we were unable to recover it. 00:25:13.443 [2024-07-15 23:51:48.380674] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.443 [2024-07-15 23:51:48.380751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.443 qpair failed and we were unable to recover it. 00:25:13.443 [2024-07-15 23:51:48.380998] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.443 [2024-07-15 23:51:48.381040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.443 qpair failed and we were unable to recover it. 
00:25:13.443 [2024-07-15 23:51:48.381188] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.443 [2024-07-15 23:51:48.381228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.443 qpair failed and we were unable to recover it. 00:25:13.443 [2024-07-15 23:51:48.381382] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.443 [2024-07-15 23:51:48.381422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.443 qpair failed and we were unable to recover it. 00:25:13.443 [2024-07-15 23:51:48.381667] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.443 [2024-07-15 23:51:48.381725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.443 qpair failed and we were unable to recover it. 00:25:13.443 [2024-07-15 23:51:48.382034] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.443 [2024-07-15 23:51:48.382115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.443 qpair failed and we were unable to recover it. 00:25:13.443 [2024-07-15 23:51:48.382433] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.443 [2024-07-15 23:51:48.382512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.443 qpair failed and we were unable to recover it. 00:25:13.443 [2024-07-15 23:51:48.382756] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.443 [2024-07-15 23:51:48.382816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.443 qpair failed and we were unable to recover it. 00:25:13.443 [2024-07-15 23:51:48.383052] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.443 [2024-07-15 23:51:48.383129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.443 qpair failed and we were unable to recover it. 00:25:13.443 [2024-07-15 23:51:48.383447] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.443 [2024-07-15 23:51:48.383532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.443 qpair failed and we were unable to recover it. 00:25:13.443 [2024-07-15 23:51:48.383885] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.443 [2024-07-15 23:51:48.383944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.443 qpair failed and we were unable to recover it. 00:25:13.443 [2024-07-15 23:51:48.384218] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.443 [2024-07-15 23:51:48.384297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.443 qpair failed and we were unable to recover it. 
00:25:13.443 [2024-07-15 23:51:48.384637] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.443 [2024-07-15 23:51:48.384713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.444 qpair failed and we were unable to recover it. 00:25:13.444 [2024-07-15 23:51:48.385003] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.444 [2024-07-15 23:51:48.385045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.444 qpair failed and we were unable to recover it. 00:25:13.444 [2024-07-15 23:51:48.385311] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.444 [2024-07-15 23:51:48.385389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.444 qpair failed and we were unable to recover it. 00:25:13.444 [2024-07-15 23:51:48.385700] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.444 [2024-07-15 23:51:48.385776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.444 qpair failed and we were unable to recover it. 00:25:13.444 [2024-07-15 23:51:48.386083] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.444 [2024-07-15 23:51:48.386162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.444 qpair failed and we were unable to recover it. 00:25:13.444 [2024-07-15 23:51:48.386483] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.444 [2024-07-15 23:51:48.386544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.444 qpair failed and we were unable to recover it. 00:25:13.444 [2024-07-15 23:51:48.386800] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.444 [2024-07-15 23:51:48.386861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.444 qpair failed and we were unable to recover it. 00:25:13.444 [2024-07-15 23:51:48.387110] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.444 [2024-07-15 23:51:48.387188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.444 qpair failed and we were unable to recover it. 00:25:13.444 [2024-07-15 23:51:48.387452] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.444 [2024-07-15 23:51:48.387529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.444 qpair failed and we were unable to recover it. 00:25:13.444 [2024-07-15 23:51:48.387788] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.444 [2024-07-15 23:51:48.387828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.444 qpair failed and we were unable to recover it. 
00:25:13.444 [2024-07-15 23:51:48.388007] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.444 [2024-07-15 23:51:48.388078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.444 qpair failed and we were unable to recover it. 00:25:13.444 [2024-07-15 23:51:48.388331] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.444 [2024-07-15 23:51:48.388408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.444 qpair failed and we were unable to recover it. 00:25:13.444 [2024-07-15 23:51:48.388692] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.444 [2024-07-15 23:51:48.388769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.444 qpair failed and we were unable to recover it. 00:25:13.444 [2024-07-15 23:51:48.389026] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.444 [2024-07-15 23:51:48.389107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.444 qpair failed and we were unable to recover it. 00:25:13.444 [2024-07-15 23:51:48.389417] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.444 [2024-07-15 23:51:48.389494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.444 qpair failed and we were unable to recover it. 00:25:13.444 [2024-07-15 23:51:48.389709] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.444 [2024-07-15 23:51:48.389770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.444 qpair failed and we were unable to recover it. 00:25:13.444 [2024-07-15 23:51:48.389985] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.444 [2024-07-15 23:51:48.390047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.444 qpair failed and we were unable to recover it. 00:25:13.444 [2024-07-15 23:51:48.390293] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.444 [2024-07-15 23:51:48.390354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.444 qpair failed and we were unable to recover it. 00:25:13.444 [2024-07-15 23:51:48.390642] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.444 [2024-07-15 23:51:48.390702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.444 qpair failed and we were unable to recover it. 00:25:13.444 [2024-07-15 23:51:48.390908] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.444 [2024-07-15 23:51:48.390979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.444 qpair failed and we were unable to recover it. 
00:25:13.444 [2024-07-15 23:51:48.391230] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.444 [2024-07-15 23:51:48.391284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.444 qpair failed and we were unable to recover it. 00:25:13.444 [2024-07-15 23:51:48.391547] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.444 [2024-07-15 23:51:48.391599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.444 qpair failed and we were unable to recover it. 00:25:13.444 [2024-07-15 23:51:48.391836] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.444 [2024-07-15 23:51:48.391906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.444 qpair failed and we were unable to recover it. 00:25:13.444 [2024-07-15 23:51:48.392197] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.444 [2024-07-15 23:51:48.392276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.444 qpair failed and we were unable to recover it. 00:25:13.444 [2024-07-15 23:51:48.392550] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.444 [2024-07-15 23:51:48.392639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.444 qpair failed and we were unable to recover it. 00:25:13.444 [2024-07-15 23:51:48.392925] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.444 [2024-07-15 23:51:48.392981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.444 qpair failed and we were unable to recover it. 00:25:13.444 [2024-07-15 23:51:48.393189] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.444 [2024-07-15 23:51:48.393256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.444 qpair failed and we were unable to recover it. 00:25:13.444 [2024-07-15 23:51:48.393460] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.444 [2024-07-15 23:51:48.393511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.444 qpair failed and we were unable to recover it. 00:25:13.444 [2024-07-15 23:51:48.393775] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.444 [2024-07-15 23:51:48.393816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.444 qpair failed and we were unable to recover it. 00:25:13.444 [2024-07-15 23:51:48.393989] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.444 [2024-07-15 23:51:48.394031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.444 qpair failed and we were unable to recover it. 
00:25:13.444 [2024-07-15 23:51:48.394177] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.444 [2024-07-15 23:51:48.394217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.444 qpair failed and we were unable to recover it. 00:25:13.444 [2024-07-15 23:51:48.394405] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.444 [2024-07-15 23:51:48.394464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.444 qpair failed and we were unable to recover it. 00:25:13.444 [2024-07-15 23:51:48.394714] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.444 [2024-07-15 23:51:48.394774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.444 qpair failed and we were unable to recover it. 00:25:13.444 [2024-07-15 23:51:48.395057] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.444 [2024-07-15 23:51:48.395129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.444 qpair failed and we were unable to recover it. 00:25:13.444 [2024-07-15 23:51:48.395404] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.444 [2024-07-15 23:51:48.395462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.444 qpair failed and we were unable to recover it. 00:25:13.444 [2024-07-15 23:51:48.395702] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.444 [2024-07-15 23:51:48.395761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.444 qpair failed and we were unable to recover it. 00:25:13.444 [2024-07-15 23:51:48.396048] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.444 [2024-07-15 23:51:48.396106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.444 qpair failed and we were unable to recover it. 00:25:13.444 [2024-07-15 23:51:48.396370] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.444 [2024-07-15 23:51:48.396429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.444 qpair failed and we were unable to recover it. 00:25:13.444 [2024-07-15 23:51:48.396686] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.444 [2024-07-15 23:51:48.396744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.444 qpair failed and we were unable to recover it. 00:25:13.444 [2024-07-15 23:51:48.397016] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.445 [2024-07-15 23:51:48.397096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.445 qpair failed and we were unable to recover it. 
00:25:13.445 [2024-07-15 23:51:48.397345] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.445 [2024-07-15 23:51:48.397385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.445 qpair failed and we were unable to recover it. 00:25:13.445 [2024-07-15 23:51:48.397628] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.445 [2024-07-15 23:51:48.397709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.445 qpair failed and we were unable to recover it. 00:25:13.445 [2024-07-15 23:51:48.397975] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.445 [2024-07-15 23:51:48.398036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.445 qpair failed and we were unable to recover it. 00:25:13.445 [2024-07-15 23:51:48.398300] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.445 [2024-07-15 23:51:48.398377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.445 qpair failed and we were unable to recover it. 00:25:13.445 [2024-07-15 23:51:48.398656] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.445 [2024-07-15 23:51:48.398752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.445 qpair failed and we were unable to recover it. 00:25:13.445 [2024-07-15 23:51:48.399028] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.445 [2024-07-15 23:51:48.399106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.445 qpair failed and we were unable to recover it. 00:25:13.445 [2024-07-15 23:51:48.399357] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.445 [2024-07-15 23:51:48.399433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.445 qpair failed and we were unable to recover it. 00:25:13.445 [2024-07-15 23:51:48.399668] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.445 [2024-07-15 23:51:48.399745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.445 qpair failed and we were unable to recover it. 00:25:13.445 [2024-07-15 23:51:48.400061] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.445 [2024-07-15 23:51:48.400138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.445 qpair failed and we were unable to recover it. 00:25:13.445 [2024-07-15 23:51:48.400438] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.445 [2024-07-15 23:51:48.400515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.445 qpair failed and we were unable to recover it. 
00:25:13.445 [2024-07-15 23:51:48.400806] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.445 [2024-07-15 23:51:48.400864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.445 qpair failed and we were unable to recover it. 00:25:13.445 [2024-07-15 23:51:48.401177] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.445 [2024-07-15 23:51:48.401255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.445 qpair failed and we were unable to recover it. 00:25:13.445 [2024-07-15 23:51:48.401537] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.445 [2024-07-15 23:51:48.401614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.445 qpair failed and we were unable to recover it. 00:25:13.445 [2024-07-15 23:51:48.401869] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.445 [2024-07-15 23:51:48.401928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.445 qpair failed and we were unable to recover it. 00:25:13.445 [2024-07-15 23:51:48.402181] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.445 [2024-07-15 23:51:48.402258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.445 qpair failed and we were unable to recover it. 00:25:13.445 [2024-07-15 23:51:48.402546] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.445 [2024-07-15 23:51:48.402623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.445 qpair failed and we were unable to recover it. 00:25:13.445 [2024-07-15 23:51:48.402904] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.445 [2024-07-15 23:51:48.402980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.445 qpair failed and we were unable to recover it. 00:25:13.445 [2024-07-15 23:51:48.403228] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.445 [2024-07-15 23:51:48.403306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.445 qpair failed and we were unable to recover it. 00:25:13.445 [2024-07-15 23:51:48.403625] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.445 [2024-07-15 23:51:48.403700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.445 qpair failed and we were unable to recover it. 00:25:13.445 [2024-07-15 23:51:48.404011] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.445 [2024-07-15 23:51:48.404072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.445 qpair failed and we were unable to recover it. 
00:25:13.445 [2024-07-15 23:51:48.404355] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.445 [2024-07-15 23:51:48.404430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.445 qpair failed and we were unable to recover it. 00:25:13.445 [2024-07-15 23:51:48.404687] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.445 [2024-07-15 23:51:48.404763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.445 qpair failed and we were unable to recover it. 00:25:13.445 [2024-07-15 23:51:48.405026] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.445 [2024-07-15 23:51:48.405086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.445 qpair failed and we were unable to recover it. 00:25:13.445 [2024-07-15 23:51:48.405329] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.445 [2024-07-15 23:51:48.405407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.445 qpair failed and we were unable to recover it. 00:25:13.445 [2024-07-15 23:51:48.405647] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.445 [2024-07-15 23:51:48.405733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.445 qpair failed and we were unable to recover it. 00:25:13.445 [2024-07-15 23:51:48.405989] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.445 [2024-07-15 23:51:48.406049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.445 qpair failed and we were unable to recover it. 00:25:13.445 [2024-07-15 23:51:48.406303] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.445 [2024-07-15 23:51:48.406379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.445 qpair failed and we were unable to recover it. 00:25:13.445 [2024-07-15 23:51:48.406616] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.445 [2024-07-15 23:51:48.406692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.445 qpair failed and we were unable to recover it. 00:25:13.445 [2024-07-15 23:51:48.406918] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.445 [2024-07-15 23:51:48.406989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.445 qpair failed and we were unable to recover it. 00:25:13.445 [2024-07-15 23:51:48.407227] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.445 [2024-07-15 23:51:48.407311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.445 qpair failed and we were unable to recover it. 
00:25:13.445 [2024-07-15 23:51:48.407562] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.445 [2024-07-15 23:51:48.407638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.445 qpair failed and we were unable to recover it.
00:25:13.445 [2024-07-15 23:51:48.407890] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.445 [2024-07-15 23:51:48.407949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.445 qpair failed and we were unable to recover it.
00:25:13.445 [2024-07-15 23:51:48.408245] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.445 [2024-07-15 23:51:48.408323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.445 qpair failed and we were unable to recover it.
00:25:13.445 [2024-07-15 23:51:48.408591] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.445 [2024-07-15 23:51:48.408668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.445 qpair failed and we were unable to recover it.
00:25:13.445 [2024-07-15 23:51:48.408885] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.446 [2024-07-15 23:51:48.408946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.446 qpair failed and we were unable to recover it.
00:25:13.446 [2024-07-15 23:51:48.409264] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.446 [2024-07-15 23:51:48.409346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.446 qpair failed and we were unable to recover it.
00:25:13.446 [2024-07-15 23:51:48.409627] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.446 [2024-07-15 23:51:48.409705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.446 qpair failed and we were unable to recover it.
00:25:13.446 [2024-07-15 23:51:48.409944] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.446 [2024-07-15 23:51:48.410040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.446 qpair failed and we were unable to recover it.
00:25:13.446 [2024-07-15 23:51:48.410335] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.446 [2024-07-15 23:51:48.410411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.446 qpair failed and we were unable to recover it.
00:25:13.446 [2024-07-15 23:51:48.410682] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.446 [2024-07-15 23:51:48.410758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.446 qpair failed and we were unable to recover it.
00:25:13.446 [2024-07-15 23:51:48.410982] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.446 [2024-07-15 23:51:48.411045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.446 qpair failed and we were unable to recover it.
00:25:13.446 [2024-07-15 23:51:48.411322] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.446 [2024-07-15 23:51:48.411399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.446 qpair failed and we were unable to recover it.
00:25:13.446 [2024-07-15 23:51:48.411604] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.446 [2024-07-15 23:51:48.411680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.446 qpair failed and we were unable to recover it.
00:25:13.446 [2024-07-15 23:51:48.411934] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.446 [2024-07-15 23:51:48.412026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.446 qpair failed and we were unable to recover it.
00:25:13.446 [2024-07-15 23:51:48.412348] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.446 [2024-07-15 23:51:48.412429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.446 qpair failed and we were unable to recover it.
00:25:13.446 [2024-07-15 23:51:48.412674] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.446 [2024-07-15 23:51:48.412753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.446 qpair failed and we were unable to recover it.
00:25:13.446 [2024-07-15 23:51:48.412998] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.446 [2024-07-15 23:51:48.413061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.446 qpair failed and we were unable to recover it.
00:25:13.446 [2024-07-15 23:51:48.413335] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.446 [2024-07-15 23:51:48.413413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.446 qpair failed and we were unable to recover it.
00:25:13.446 [2024-07-15 23:51:48.413683] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.446 [2024-07-15 23:51:48.413762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.446 qpair failed and we were unable to recover it.
00:25:13.446 [2024-07-15 23:51:48.414048] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.446 [2024-07-15 23:51:48.414127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.446 qpair failed and we were unable to recover it.
00:25:13.446 [2024-07-15 23:51:48.414437] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.446 [2024-07-15 23:51:48.414515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.446 qpair failed and we were unable to recover it.
00:25:13.446 [2024-07-15 23:51:48.414796] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.446 [2024-07-15 23:51:48.414856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.446 qpair failed and we were unable to recover it.
00:25:13.446 [2024-07-15 23:51:48.415188] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.446 [2024-07-15 23:51:48.415279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.446 qpair failed and we were unable to recover it.
00:25:13.446 [2024-07-15 23:51:48.415533] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.446 [2024-07-15 23:51:48.415611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.446 qpair failed and we were unable to recover it.
00:25:13.446 [2024-07-15 23:51:48.415887] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.446 [2024-07-15 23:51:48.415946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.446 qpair failed and we were unable to recover it.
00:25:13.446 [2024-07-15 23:51:48.416300] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.446 [2024-07-15 23:51:48.416360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.446 qpair failed and we were unable to recover it.
00:25:13.446 [2024-07-15 23:51:48.416635] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.446 [2024-07-15 23:51:48.416713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.446 qpair failed and we were unable to recover it.
00:25:13.446 [2024-07-15 23:51:48.416924] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.446 [2024-07-15 23:51:48.417011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.446 qpair failed and we were unable to recover it.
00:25:13.446 [2024-07-15 23:51:48.417276] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.446 [2024-07-15 23:51:48.417352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.446 qpair failed and we were unable to recover it.
00:25:13.446 [2024-07-15 23:51:48.417662] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.446 [2024-07-15 23:51:48.417740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.446 qpair failed and we were unable to recover it.
00:25:13.446 [2024-07-15 23:51:48.417942] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.446 [2024-07-15 23:51:48.418020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.446 qpair failed and we were unable to recover it.
00:25:13.446 [2024-07-15 23:51:48.418301] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.446 [2024-07-15 23:51:48.418378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.446 qpair failed and we were unable to recover it.
00:25:13.446 [2024-07-15 23:51:48.418648] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.446 [2024-07-15 23:51:48.418725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.446 qpair failed and we were unable to recover it.
00:25:13.446 [2024-07-15 23:51:48.418947] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.446 [2024-07-15 23:51:48.419023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.446 qpair failed and we were unable to recover it.
00:25:13.446 [2024-07-15 23:51:48.419318] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.446 [2024-07-15 23:51:48.419404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.446 qpair failed and we were unable to recover it.
00:25:13.446 [2024-07-15 23:51:48.419690] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.446 [2024-07-15 23:51:48.419768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.446 qpair failed and we were unable to recover it.
00:25:13.446 [2024-07-15 23:51:48.420042] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.446 [2024-07-15 23:51:48.420103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.446 qpair failed and we were unable to recover it.
00:25:13.446 [2024-07-15 23:51:48.420401] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.446 [2024-07-15 23:51:48.420465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.446 qpair failed and we were unable to recover it.
00:25:13.446 [2024-07-15 23:51:48.420721] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.446 [2024-07-15 23:51:48.420798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.446 qpair failed and we were unable to recover it.
00:25:13.446 [2024-07-15 23:51:48.421080] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.446 [2024-07-15 23:51:48.421159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.446 qpair failed and we were unable to recover it.
00:25:13.446 [2024-07-15 23:51:48.421441] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.446 [2024-07-15 23:51:48.421519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.446 qpair failed and we were unable to recover it.
00:25:13.446 [2024-07-15 23:51:48.421821] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.447 [2024-07-15 23:51:48.421897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.447 qpair failed and we were unable to recover it.
00:25:13.447 [2024-07-15 23:51:48.422168] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.447 [2024-07-15 23:51:48.422247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.447 qpair failed and we were unable to recover it.
00:25:13.447 [2024-07-15 23:51:48.422563] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.447 [2024-07-15 23:51:48.422639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.447 qpair failed and we were unable to recover it.
00:25:13.447 [2024-07-15 23:51:48.422929] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.447 [2024-07-15 23:51:48.423004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.447 qpair failed and we were unable to recover it.
00:25:13.447 [2024-07-15 23:51:48.423318] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.447 [2024-07-15 23:51:48.423396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.447 qpair failed and we were unable to recover it.
00:25:13.447 [2024-07-15 23:51:48.423676] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.447 [2024-07-15 23:51:48.423754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.447 qpair failed and we were unable to recover it.
00:25:13.447 [2024-07-15 23:51:48.424041] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.447 [2024-07-15 23:51:48.424103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.447 qpair failed and we were unable to recover it.
00:25:13.447 [2024-07-15 23:51:48.424409] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.447 [2024-07-15 23:51:48.424487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.447 qpair failed and we were unable to recover it.
00:25:13.447 [2024-07-15 23:51:48.424720] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.447 [2024-07-15 23:51:48.424797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.447 qpair failed and we were unable to recover it.
00:25:13.447 [2024-07-15 23:51:48.425069] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.447 [2024-07-15 23:51:48.425147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.447 qpair failed and we were unable to recover it.
00:25:13.447 [2024-07-15 23:51:48.425386] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.447 [2024-07-15 23:51:48.425464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.447 qpair failed and we were unable to recover it.
00:25:13.447 [2024-07-15 23:51:48.425692] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.447 [2024-07-15 23:51:48.425770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.447 qpair failed and we were unable to recover it.
00:25:13.447 [2024-07-15 23:51:48.425994] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.447 [2024-07-15 23:51:48.426056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.447 qpair failed and we were unable to recover it.
00:25:13.447 [2024-07-15 23:51:48.426303] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.447 [2024-07-15 23:51:48.426382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.447 qpair failed and we were unable to recover it.
00:25:13.447 [2024-07-15 23:51:48.426666] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.447 [2024-07-15 23:51:48.426745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.447 qpair failed and we were unable to recover it.
00:25:13.447 [2024-07-15 23:51:48.427019] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.447 [2024-07-15 23:51:48.427099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.447 qpair failed and we were unable to recover it.
00:25:13.447 [2024-07-15 23:51:48.427358] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.447 [2024-07-15 23:51:48.427420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.447 qpair failed and we were unable to recover it.
00:25:13.447 [2024-07-15 23:51:48.427753] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.447 [2024-07-15 23:51:48.427830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.447 qpair failed and we were unable to recover it.
00:25:13.447 [2024-07-15 23:51:48.428156] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.447 [2024-07-15 23:51:48.428234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.447 qpair failed and we were unable to recover it.
00:25:13.447 [2024-07-15 23:51:48.428522] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.447 [2024-07-15 23:51:48.428602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.447 qpair failed and we were unable to recover it.
00:25:13.447 [2024-07-15 23:51:48.428902] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.447 [2024-07-15 23:51:48.428973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.447 qpair failed and we were unable to recover it.
00:25:13.447 [2024-07-15 23:51:48.429213] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.447 [2024-07-15 23:51:48.429290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.447 qpair failed and we were unable to recover it.
00:25:13.447 [2024-07-15 23:51:48.429558] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.447 [2024-07-15 23:51:48.429635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.447 qpair failed and we were unable to recover it.
00:25:13.447 [2024-07-15 23:51:48.429838] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.447 [2024-07-15 23:51:48.429897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.447 qpair failed and we were unable to recover it.
00:25:13.447 [2024-07-15 23:51:48.430230] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.447 [2024-07-15 23:51:48.430313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.447 qpair failed and we were unable to recover it.
00:25:13.447 [2024-07-15 23:51:48.430585] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.447 [2024-07-15 23:51:48.430664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.447 qpair failed and we were unable to recover it.
00:25:13.447 [2024-07-15 23:51:48.430918] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.447 [2024-07-15 23:51:48.430988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.447 qpair failed and we were unable to recover it.
00:25:13.447 [2024-07-15 23:51:48.431238] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.447 [2024-07-15 23:51:48.431315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.447 qpair failed and we were unable to recover it.
00:25:13.447 [2024-07-15 23:51:48.431542] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.447 [2024-07-15 23:51:48.431621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.447 qpair failed and we were unable to recover it.
00:25:13.447 [2024-07-15 23:51:48.431878] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.448 [2024-07-15 23:51:48.431937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.448 qpair failed and we were unable to recover it.
00:25:13.448 [2024-07-15 23:51:48.432242] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.448 [2024-07-15 23:51:48.432319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.448 qpair failed and we were unable to recover it.
00:25:13.448 [2024-07-15 23:51:48.432615] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.448 [2024-07-15 23:51:48.432692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.448 qpair failed and we were unable to recover it.
00:25:13.448 [2024-07-15 23:51:48.432951] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.448 [2024-07-15 23:51:48.433028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.448 qpair failed and we were unable to recover it.
00:25:13.448 [2024-07-15 23:51:48.433278] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.448 [2024-07-15 23:51:48.433364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.448 qpair failed and we were unable to recover it.
00:25:13.448 [2024-07-15 23:51:48.433679] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.448 [2024-07-15 23:51:48.433764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.448 qpair failed and we were unable to recover it.
00:25:13.448 [2024-07-15 23:51:48.433988] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.448 [2024-07-15 23:51:48.434050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.448 qpair failed and we were unable to recover it.
00:25:13.448 [2024-07-15 23:51:48.434320] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.448 [2024-07-15 23:51:48.434398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.448 qpair failed and we were unable to recover it.
00:25:13.448 [2024-07-15 23:51:48.434672] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.448 [2024-07-15 23:51:48.434749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.448 qpair failed and we were unable to recover it.
00:25:13.448 [2024-07-15 23:51:48.434967] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.448 [2024-07-15 23:51:48.435028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.448 qpair failed and we were unable to recover it.
00:25:13.448 [2024-07-15 23:51:48.435261] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.448 [2024-07-15 23:51:48.435323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.448 qpair failed and we were unable to recover it.
00:25:13.448 [2024-07-15 23:51:48.435612] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.448 [2024-07-15 23:51:48.435671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.448 qpair failed and we were unable to recover it.
00:25:13.448 [2024-07-15 23:51:48.435876] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.448 [2024-07-15 23:51:48.435936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.448 qpair failed and we were unable to recover it.
00:25:13.448 [2024-07-15 23:51:48.436272] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.448 [2024-07-15 23:51:48.436349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.448 qpair failed and we were unable to recover it.
00:25:13.448 [2024-07-15 23:51:48.436595] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.448 [2024-07-15 23:51:48.436670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.448 qpair failed and we were unable to recover it.
00:25:13.448 [2024-07-15 23:51:48.436886] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.448 [2024-07-15 23:51:48.436947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.448 qpair failed and we were unable to recover it.
00:25:13.448 [2024-07-15 23:51:48.437276] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.448 [2024-07-15 23:51:48.437353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.448 qpair failed and we were unable to recover it.
00:25:13.448 [2024-07-15 23:51:48.437622] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.448 [2024-07-15 23:51:48.437698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.448 qpair failed and we were unable to recover it.
00:25:13.448 [2024-07-15 23:51:48.437985] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.448 [2024-07-15 23:51:48.438046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.448 qpair failed and we were unable to recover it.
00:25:13.448 [2024-07-15 23:51:48.438314] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.448 [2024-07-15 23:51:48.438391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.448 qpair failed and we were unable to recover it.
00:25:13.448 [2024-07-15 23:51:48.438678] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.448 [2024-07-15 23:51:48.438756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.448 qpair failed and we were unable to recover it.
00:25:13.448 [2024-07-15 23:51:48.438966] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.448 [2024-07-15 23:51:48.439026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.448 qpair failed and we were unable to recover it.
00:25:13.448 [2024-07-15 23:51:48.439306] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.448 [2024-07-15 23:51:48.439383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.448 qpair failed and we were unable to recover it.
00:25:13.448 [2024-07-15 23:51:48.439659] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.448 [2024-07-15 23:51:48.439736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.448 qpair failed and we were unable to recover it.
00:25:13.448 [2024-07-15 23:51:48.440016] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.448 [2024-07-15 23:51:48.440077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.448 qpair failed and we were unable to recover it.
00:25:13.448 [2024-07-15 23:51:48.440334] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.448 [2024-07-15 23:51:48.440411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.448 qpair failed and we were unable to recover it.
00:25:13.448 [2024-07-15 23:51:48.440676] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.448 [2024-07-15 23:51:48.440752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.448 qpair failed and we were unable to recover it.
00:25:13.448 [2024-07-15 23:51:48.441034] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.448 [2024-07-15 23:51:48.441095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.448 qpair failed and we were unable to recover it.
00:25:13.448 [2024-07-15 23:51:48.441374] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.448 [2024-07-15 23:51:48.441451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.448 qpair failed and we were unable to recover it.
00:25:13.448 [2024-07-15 23:51:48.441753] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.448 [2024-07-15 23:51:48.441829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.448 qpair failed and we were unable to recover it.
00:25:13.448 [2024-07-15 23:51:48.442070] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.448 [2024-07-15 23:51:48.442159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.448 qpair failed and we were unable to recover it.
00:25:13.448 [2024-07-15 23:51:48.442480] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.448 [2024-07-15 23:51:48.442559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.448 qpair failed and we were unable to recover it.
00:25:13.448 [2024-07-15 23:51:48.442770] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.448 [2024-07-15 23:51:48.442830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.448 qpair failed and we were unable to recover it.
00:25:13.448 [2024-07-15 23:51:48.443099] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.448 [2024-07-15 23:51:48.443179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.448 qpair failed and we were unable to recover it.
00:25:13.448 [2024-07-15 23:51:48.443492] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.448 [2024-07-15 23:51:48.443569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.448 qpair failed and we were unable to recover it.
00:25:13.448 [2024-07-15 23:51:48.443828] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.448 [2024-07-15 23:51:48.443890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.448 qpair failed and we were unable to recover it.
00:25:13.448 [2024-07-15 23:51:48.444157] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.448 [2024-07-15 23:51:48.444235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.449 qpair failed and we were unable to recover it.
00:25:13.449 [2024-07-15 23:51:48.444471] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.449 [2024-07-15 23:51:48.444547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.449 qpair failed and we were unable to recover it.
00:25:13.449 [2024-07-15 23:51:48.444807] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.449 [2024-07-15 23:51:48.444867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.449 qpair failed and we were unable to recover it.
00:25:13.449 [2024-07-15 23:51:48.445145] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.449 [2024-07-15 23:51:48.445221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.449 qpair failed and we were unable to recover it.
00:25:13.449 [2024-07-15 23:51:48.445496] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.449 [2024-07-15 23:51:48.445572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.449 qpair failed and we were unable to recover it.
00:25:13.449 [2024-07-15 23:51:48.445788] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.449 [2024-07-15 23:51:48.445849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.449 qpair failed and we were unable to recover it.
00:25:13.449 [2024-07-15 23:51:48.446114] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.449 [2024-07-15 23:51:48.446193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.449 qpair failed and we were unable to recover it.
00:25:13.449 [2024-07-15 23:51:48.446473] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.449 [2024-07-15 23:51:48.446549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.449 qpair failed and we were unable to recover it.
00:25:13.449 [2024-07-15 23:51:48.446772] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.449 [2024-07-15 23:51:48.446842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.449 qpair failed and we were unable to recover it.
00:25:13.449 [2024-07-15 23:51:48.447115] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.449 [2024-07-15 23:51:48.447193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.449 qpair failed and we were unable to recover it.
00:25:13.449 [2024-07-15 23:51:48.447475] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.449 [2024-07-15 23:51:48.447551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.449 qpair failed and we were unable to recover it.
00:25:13.449 [2024-07-15 23:51:48.447834] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.449 [2024-07-15 23:51:48.447892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.449 qpair failed and we were unable to recover it.
00:25:13.449 [2024-07-15 23:51:48.448163] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.449 [2024-07-15 23:51:48.448240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.449 qpair failed and we were unable to recover it.
00:25:13.449 [2024-07-15 23:51:48.448535] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.449 [2024-07-15 23:51:48.448612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.449 qpair failed and we were unable to recover it.
00:25:13.449 [2024-07-15 23:51:48.448876] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.449 [2024-07-15 23:51:48.448935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.449 qpair failed and we were unable to recover it.
00:25:13.449 [2024-07-15 23:51:48.449205] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.449 [2024-07-15 23:51:48.449267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.449 qpair failed and we were unable to recover it.
00:25:13.449 [2024-07-15 23:51:48.449543] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.449 [2024-07-15 23:51:48.449620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.449 qpair failed and we were unable to recover it.
00:25:13.449 [2024-07-15 23:51:48.449870] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.449 [2024-07-15 23:51:48.449929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.449 qpair failed and we were unable to recover it.
00:25:13.449 [2024-07-15 23:51:48.450206] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.449 [2024-07-15 23:51:48.450283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.449 qpair failed and we were unable to recover it.
00:25:13.449 [2024-07-15 23:51:48.450566] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.449 [2024-07-15 23:51:48.450644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.449 qpair failed and we were unable to recover it.
00:25:13.449 [2024-07-15 23:51:48.450905] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.449 [2024-07-15 23:51:48.450980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.449 qpair failed and we were unable to recover it.
00:25:13.449 [2024-07-15 23:51:48.451270] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.449 [2024-07-15 23:51:48.451348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.449 qpair failed and we were unable to recover it.
00:25:13.449 [2024-07-15 23:51:48.451663] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.449 [2024-07-15 23:51:48.451749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.449 qpair failed and we were unable to recover it.
00:25:13.449 [2024-07-15 23:51:48.452062] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.449 [2024-07-15 23:51:48.452154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.449 qpair failed and we were unable to recover it.
00:25:13.449 [2024-07-15 23:51:48.452439] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.449 [2024-07-15 23:51:48.452518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.449 qpair failed and we were unable to recover it.
00:25:13.449 [2024-07-15 23:51:48.452814] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.449 [2024-07-15 23:51:48.452893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.449 qpair failed and we were unable to recover it.
00:25:13.449 [2024-07-15 23:51:48.453242] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.449 [2024-07-15 23:51:48.453319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.449 qpair failed and we were unable to recover it.
00:25:13.449 [2024-07-15 23:51:48.453520] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.449 [2024-07-15 23:51:48.453582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.449 qpair failed and we were unable to recover it.
00:25:13.449 [2024-07-15 23:51:48.453816] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.449 [2024-07-15 23:51:48.453875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.449 qpair failed and we were unable to recover it.
00:25:13.449 [2024-07-15 23:51:48.454189] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.449 [2024-07-15 23:51:48.454267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.449 qpair failed and we were unable to recover it.
00:25:13.449 [2024-07-15 23:51:48.454597] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.449 [2024-07-15 23:51:48.454676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.449 qpair failed and we were unable to recover it.
00:25:13.449 [2024-07-15 23:51:48.454893] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.449 [2024-07-15 23:51:48.454952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.449 qpair failed and we were unable to recover it.
00:25:13.449 [2024-07-15 23:51:48.455263] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.449 [2024-07-15 23:51:48.455346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.449 qpair failed and we were unable to recover it.
00:25:13.449 [2024-07-15 23:51:48.455614] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.449 [2024-07-15 23:51:48.455690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.449 qpair failed and we were unable to recover it.
00:25:13.449 [2024-07-15 23:51:48.455943] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.449 [2024-07-15 23:51:48.456033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.449 qpair failed and we were unable to recover it.
00:25:13.449 [2024-07-15 23:51:48.456275] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.449 [2024-07-15 23:51:48.456352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.449 qpair failed and we were unable to recover it.
00:25:13.449 [2024-07-15 23:51:48.456630] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.449 [2024-07-15 23:51:48.456714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.449 qpair failed and we were unable to recover it.
00:25:13.449 [2024-07-15 23:51:48.456990] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.449 [2024-07-15 23:51:48.457051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.449 qpair failed and we were unable to recover it.
00:25:13.449 [2024-07-15 23:51:48.457298] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.449 [2024-07-15 23:51:48.457375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.449 qpair failed and we were unable to recover it.
00:25:13.449 [2024-07-15 23:51:48.457639] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.449 [2024-07-15 23:51:48.457715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.449 qpair failed and we were unable to recover it.
00:25:13.449 [2024-07-15 23:51:48.457967] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.449 [2024-07-15 23:51:48.458029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.449 qpair failed and we were unable to recover it.
00:25:13.449 [2024-07-15 23:51:48.458280] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.449 [2024-07-15 23:51:48.458356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.450 qpair failed and we were unable to recover it.
00:25:13.450 [2024-07-15 23:51:48.458668] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.450 [2024-07-15 23:51:48.458744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.450 qpair failed and we were unable to recover it.
00:25:13.450 [2024-07-15 23:51:48.459005] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.450 [2024-07-15 23:51:48.459065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.450 qpair failed and we were unable to recover it.
00:25:13.450 [2024-07-15 23:51:48.459316] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.450 [2024-07-15 23:51:48.459395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.450 qpair failed and we were unable to recover it.
00:25:13.450 [2024-07-15 23:51:48.459644] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.450 [2024-07-15 23:51:48.459720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.450 qpair failed and we were unable to recover it.
00:25:13.450 [2024-07-15 23:51:48.460002] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.450 [2024-07-15 23:51:48.460062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.450 qpair failed and we were unable to recover it.
00:25:13.450 [2024-07-15 23:51:48.460350] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.450 [2024-07-15 23:51:48.460427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.450 qpair failed and we were unable to recover it.
00:25:13.450 [2024-07-15 23:51:48.460722] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.450 [2024-07-15 23:51:48.460808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.450 qpair failed and we were unable to recover it.
00:25:13.450 [2024-07-15 23:51:48.461042] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.450 [2024-07-15 23:51:48.461122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.450 qpair failed and we were unable to recover it.
00:25:13.450 [2024-07-15 23:51:48.461387] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.450 [2024-07-15 23:51:48.461464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.450 qpair failed and we were unable to recover it.
00:25:13.450 [2024-07-15 23:51:48.461706] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.450 [2024-07-15 23:51:48.461796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.450 qpair failed and we were unable to recover it.
00:25:13.450 [2024-07-15 23:51:48.462108] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.450 [2024-07-15 23:51:48.462187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.450 qpair failed and we were unable to recover it.
00:25:13.450 [2024-07-15 23:51:48.462462] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.450 [2024-07-15 23:51:48.462538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.450 qpair failed and we were unable to recover it.
00:25:13.450 [2024-07-15 23:51:48.462776] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.450 [2024-07-15 23:51:48.462835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.450 qpair failed and we were unable to recover it.
00:25:13.450 [2024-07-15 23:51:48.463132] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.450 [2024-07-15 23:51:48.463209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.450 qpair failed and we were unable to recover it.
00:25:13.450 [2024-07-15 23:51:48.463498] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.450 [2024-07-15 23:51:48.463575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.450 qpair failed and we were unable to recover it.
00:25:13.450 [2024-07-15 23:51:48.463790] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.450 [2024-07-15 23:51:48.463849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.450 qpair failed and we were unable to recover it.
00:25:13.450 [2024-07-15 23:51:48.464132] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.450 [2024-07-15 23:51:48.464210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.450 qpair failed and we were unable to recover it.
00:25:13.450 [2024-07-15 23:51:48.464523] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.450 [2024-07-15 23:51:48.464600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.450 qpair failed and we were unable to recover it.
00:25:13.450 [2024-07-15 23:51:48.464849] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.450 [2024-07-15 23:51:48.464909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.450 qpair failed and we were unable to recover it.
00:25:13.450 [2024-07-15 23:51:48.465163] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.450 [2024-07-15 23:51:48.465243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.450 qpair failed and we were unable to recover it.
00:25:13.450 [2024-07-15 23:51:48.465570] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.450 [2024-07-15 23:51:48.465648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.450 qpair failed and we were unable to recover it.
00:25:13.450 [2024-07-15 23:51:48.465896] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.450 [2024-07-15 23:51:48.465972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.450 qpair failed and we were unable to recover it.
00:25:13.450 [2024-07-15 23:51:48.466218] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.450 [2024-07-15 23:51:48.466297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.450 qpair failed and we were unable to recover it. 00:25:13.450 [2024-07-15 23:51:48.466559] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.450 [2024-07-15 23:51:48.466636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.450 qpair failed and we were unable to recover it. 00:25:13.450 [2024-07-15 23:51:48.466882] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.450 [2024-07-15 23:51:48.466941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.450 qpair failed and we were unable to recover it. 00:25:13.450 [2024-07-15 23:51:48.467207] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.450 [2024-07-15 23:51:48.467286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.450 qpair failed and we were unable to recover it. 00:25:13.450 [2024-07-15 23:51:48.467534] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.450 [2024-07-15 23:51:48.467620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.450 qpair failed and we were unable to recover it. 00:25:13.450 [2024-07-15 23:51:48.467872] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.450 [2024-07-15 23:51:48.467933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.450 qpair failed and we were unable to recover it. 00:25:13.450 [2024-07-15 23:51:48.468247] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.450 [2024-07-15 23:51:48.468325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.450 qpair failed and we were unable to recover it. 00:25:13.450 [2024-07-15 23:51:48.468594] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.450 [2024-07-15 23:51:48.468672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.450 qpair failed and we were unable to recover it. 00:25:13.450 [2024-07-15 23:51:48.468889] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.450 [2024-07-15 23:51:48.468951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.450 qpair failed and we were unable to recover it. 00:25:13.450 [2024-07-15 23:51:48.469277] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.450 [2024-07-15 23:51:48.469355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.450 qpair failed and we were unable to recover it. 
00:25:13.450 [2024-07-15 23:51:48.469580] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.450 [2024-07-15 23:51:48.469658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.450 qpair failed and we were unable to recover it. 00:25:13.450 [2024-07-15 23:51:48.469884] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.450 [2024-07-15 23:51:48.469943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.450 qpair failed and we were unable to recover it. 00:25:13.450 [2024-07-15 23:51:48.470272] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.450 [2024-07-15 23:51:48.470349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.450 qpair failed and we were unable to recover it. 00:25:13.450 [2024-07-15 23:51:48.470582] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.450 [2024-07-15 23:51:48.470661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.450 qpair failed and we were unable to recover it. 00:25:13.450 [2024-07-15 23:51:48.470905] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.450 [2024-07-15 23:51:48.470983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.450 qpair failed and we were unable to recover it. 00:25:13.450 [2024-07-15 23:51:48.471242] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.450 [2024-07-15 23:51:48.471301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.450 qpair failed and we were unable to recover it. 00:25:13.450 [2024-07-15 23:51:48.471563] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.451 [2024-07-15 23:51:48.471641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.451 qpair failed and we were unable to recover it. 00:25:13.451 [2024-07-15 23:51:48.471898] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.451 [2024-07-15 23:51:48.471976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.451 qpair failed and we were unable to recover it. 00:25:13.451 [2024-07-15 23:51:48.472255] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.451 [2024-07-15 23:51:48.472333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.451 qpair failed and we were unable to recover it. 00:25:13.451 [2024-07-15 23:51:48.472562] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.451 [2024-07-15 23:51:48.472639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.451 qpair failed and we were unable to recover it. 
00:25:13.451 [2024-07-15 23:51:48.472870] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.451 [2024-07-15 23:51:48.472930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.451 qpair failed and we were unable to recover it. 00:25:13.451 [2024-07-15 23:51:48.473210] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.451 [2024-07-15 23:51:48.473287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.451 qpair failed and we were unable to recover it. 00:25:13.451 [2024-07-15 23:51:48.473553] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.451 [2024-07-15 23:51:48.473631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.451 qpair failed and we were unable to recover it. 00:25:13.451 [2024-07-15 23:51:48.473878] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.451 [2024-07-15 23:51:48.473937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.451 qpair failed and we were unable to recover it. 00:25:13.451 [2024-07-15 23:51:48.474234] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.451 [2024-07-15 23:51:48.474303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.451 qpair failed and we were unable to recover it. 00:25:13.451 [2024-07-15 23:51:48.474597] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.451 [2024-07-15 23:51:48.474659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.451 qpair failed and we were unable to recover it. 00:25:13.451 [2024-07-15 23:51:48.474902] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.451 [2024-07-15 23:51:48.474979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.451 qpair failed and we were unable to recover it. 00:25:13.451 [2024-07-15 23:51:48.475286] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.451 [2024-07-15 23:51:48.475346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.451 qpair failed and we were unable to recover it. 00:25:13.451 [2024-07-15 23:51:48.475637] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.451 [2024-07-15 23:51:48.475697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.451 qpair failed and we were unable to recover it. 00:25:13.451 [2024-07-15 23:51:48.475917] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.451 [2024-07-15 23:51:48.476009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.451 qpair failed and we were unable to recover it. 
00:25:13.451 [2024-07-15 23:51:48.476297] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.451 [2024-07-15 23:51:48.476358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.451 qpair failed and we were unable to recover it. 00:25:13.451 [2024-07-15 23:51:48.476650] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.451 [2024-07-15 23:51:48.476709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.451 qpair failed and we were unable to recover it. 00:25:13.451 [2024-07-15 23:51:48.476992] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.451 [2024-07-15 23:51:48.477053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.451 qpair failed and we were unable to recover it. 00:25:13.451 [2024-07-15 23:51:48.477281] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.451 [2024-07-15 23:51:48.477359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.451 qpair failed and we were unable to recover it. 00:25:13.451 [2024-07-15 23:51:48.477679] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.451 [2024-07-15 23:51:48.477755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.451 qpair failed and we were unable to recover it. 00:25:13.451 [2024-07-15 23:51:48.477995] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.451 [2024-07-15 23:51:48.478056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.451 qpair failed and we were unable to recover it. 00:25:13.451 [2024-07-15 23:51:48.478331] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.451 [2024-07-15 23:51:48.478392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.451 qpair failed and we were unable to recover it. 00:25:13.451 [2024-07-15 23:51:48.478621] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.451 [2024-07-15 23:51:48.478681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.451 qpair failed and we were unable to recover it. 00:25:13.451 [2024-07-15 23:51:48.478934] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.451 [2024-07-15 23:51:48.479009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.451 qpair failed and we were unable to recover it. 00:25:13.451 [2024-07-15 23:51:48.479300] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.451 [2024-07-15 23:51:48.479378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.451 qpair failed and we were unable to recover it. 
00:25:13.451 [2024-07-15 23:51:48.479653] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.451 [2024-07-15 23:51:48.479731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.451 qpair failed and we were unable to recover it. 00:25:13.451 [2024-07-15 23:51:48.479969] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.451 [2024-07-15 23:51:48.480029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.451 qpair failed and we were unable to recover it. 00:25:13.451 [2024-07-15 23:51:48.480305] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.451 [2024-07-15 23:51:48.480381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.451 qpair failed and we were unable to recover it. 00:25:13.451 [2024-07-15 23:51:48.480598] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.451 [2024-07-15 23:51:48.480674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.451 qpair failed and we were unable to recover it. 00:25:13.451 [2024-07-15 23:51:48.480930] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.451 [2024-07-15 23:51:48.481006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.451 qpair failed and we were unable to recover it. 00:25:13.451 [2024-07-15 23:51:48.481245] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.451 [2024-07-15 23:51:48.481323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.451 qpair failed and we were unable to recover it. 00:25:13.451 [2024-07-15 23:51:48.481634] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.451 [2024-07-15 23:51:48.481713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.451 qpair failed and we were unable to recover it. 00:25:13.451 [2024-07-15 23:51:48.482000] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.451 [2024-07-15 23:51:48.482061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.451 qpair failed and we were unable to recover it. 00:25:13.451 [2024-07-15 23:51:48.482304] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.451 [2024-07-15 23:51:48.482380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.451 qpair failed and we were unable to recover it. 00:25:13.451 [2024-07-15 23:51:48.482642] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.451 [2024-07-15 23:51:48.482718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.451 qpair failed and we were unable to recover it. 
00:25:13.451 [2024-07-15 23:51:48.483009] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.451 [2024-07-15 23:51:48.483069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.451 qpair failed and we were unable to recover it. 00:25:13.451 [2024-07-15 23:51:48.483349] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.451 [2024-07-15 23:51:48.483425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.452 qpair failed and we were unable to recover it. 00:25:13.452 [2024-07-15 23:51:48.483733] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.452 [2024-07-15 23:51:48.483810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.452 qpair failed and we were unable to recover it. 00:25:13.452 [2024-07-15 23:51:48.484141] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.452 [2024-07-15 23:51:48.484202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.452 qpair failed and we were unable to recover it. 00:25:13.452 [2024-07-15 23:51:48.484447] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.452 [2024-07-15 23:51:48.484524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.452 qpair failed and we were unable to recover it. 00:25:13.452 [2024-07-15 23:51:48.484769] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.452 [2024-07-15 23:51:48.484828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.452 qpair failed and we were unable to recover it. 00:25:13.452 [2024-07-15 23:51:48.485110] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.452 [2024-07-15 23:51:48.485189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.452 qpair failed and we were unable to recover it. 00:25:13.452 [2024-07-15 23:51:48.485511] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.452 [2024-07-15 23:51:48.485588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.452 qpair failed and we were unable to recover it. 00:25:13.452 [2024-07-15 23:51:48.485812] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.452 [2024-07-15 23:51:48.485871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.452 qpair failed and we were unable to recover it. 00:25:13.452 [2024-07-15 23:51:48.486148] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.452 [2024-07-15 23:51:48.486229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.452 qpair failed and we were unable to recover it. 
00:25:13.452 [2024-07-15 23:51:48.486537] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.452 [2024-07-15 23:51:48.486621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.452 qpair failed and we were unable to recover it. 00:25:13.452 [2024-07-15 23:51:48.486865] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.452 [2024-07-15 23:51:48.486924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.452 qpair failed and we were unable to recover it. 00:25:13.452 [2024-07-15 23:51:48.487197] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.452 [2024-07-15 23:51:48.487275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.452 qpair failed and we were unable to recover it. 00:25:13.452 [2024-07-15 23:51:48.487548] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.452 [2024-07-15 23:51:48.487627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.452 qpair failed and we were unable to recover it. 00:25:13.452 [2024-07-15 23:51:48.487909] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.452 [2024-07-15 23:51:48.487996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.452 qpair failed and we were unable to recover it. 00:25:13.452 [2024-07-15 23:51:48.488263] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.452 [2024-07-15 23:51:48.488325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.452 qpair failed and we were unable to recover it. 00:25:13.452 [2024-07-15 23:51:48.488564] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.452 [2024-07-15 23:51:48.488641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.452 qpair failed and we were unable to recover it. 00:25:13.452 [2024-07-15 23:51:48.488866] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.452 [2024-07-15 23:51:48.488926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.452 qpair failed and we were unable to recover it. 00:25:13.452 [2024-07-15 23:51:48.489228] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.452 [2024-07-15 23:51:48.489306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.452 qpair failed and we were unable to recover it. 00:25:13.452 [2024-07-15 23:51:48.489586] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.452 [2024-07-15 23:51:48.489664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.452 qpair failed and we were unable to recover it. 
00:25:13.452 [2024-07-15 23:51:48.489947] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.452 [2024-07-15 23:51:48.490022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.452 qpair failed and we were unable to recover it. 00:25:13.452 [2024-07-15 23:51:48.490270] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.452 [2024-07-15 23:51:48.490347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.452 qpair failed and we were unable to recover it. 00:25:13.452 [2024-07-15 23:51:48.490620] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.452 [2024-07-15 23:51:48.490696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.452 qpair failed and we were unable to recover it. 00:25:13.452 [2024-07-15 23:51:48.490978] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.452 [2024-07-15 23:51:48.491039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.452 qpair failed and we were unable to recover it. 00:25:13.452 [2024-07-15 23:51:48.491284] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.452 [2024-07-15 23:51:48.491361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.452 qpair failed and we were unable to recover it. 00:25:13.452 [2024-07-15 23:51:48.491672] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.452 [2024-07-15 23:51:48.491749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.452 qpair failed and we were unable to recover it. 00:25:13.452 [2024-07-15 23:51:48.492006] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.452 [2024-07-15 23:51:48.492068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.452 qpair failed and we were unable to recover it. 00:25:13.452 [2024-07-15 23:51:48.492342] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.452 [2024-07-15 23:51:48.492420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.452 qpair failed and we were unable to recover it. 00:25:13.452 [2024-07-15 23:51:48.492723] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.452 [2024-07-15 23:51:48.492799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.452 qpair failed and we were unable to recover it. 00:25:13.452 [2024-07-15 23:51:48.493025] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.452 [2024-07-15 23:51:48.493086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.452 qpair failed and we were unable to recover it. 
00:25:13.452 [2024-07-15 23:51:48.493354] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.452 [2024-07-15 23:51:48.493432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.452 qpair failed and we were unable to recover it. 00:25:13.452 [2024-07-15 23:51:48.493685] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.452 [2024-07-15 23:51:48.493761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.452 qpair failed and we were unable to recover it. 00:25:13.452 [2024-07-15 23:51:48.494075] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.452 [2024-07-15 23:51:48.494153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.452 qpair failed and we were unable to recover it. 00:25:13.452 [2024-07-15 23:51:48.494474] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.452 [2024-07-15 23:51:48.494553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.452 qpair failed and we were unable to recover it. 00:25:13.452 [2024-07-15 23:51:48.494850] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.452 [2024-07-15 23:51:48.494908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.452 qpair failed and we were unable to recover it. 00:25:13.452 [2024-07-15 23:51:48.495182] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.452 [2024-07-15 23:51:48.495261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.452 qpair failed and we were unable to recover it. 00:25:13.452 [2024-07-15 23:51:48.495540] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.452 [2024-07-15 23:51:48.495621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.452 qpair failed and we were unable to recover it. 00:25:13.452 [2024-07-15 23:51:48.495903] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.452 [2024-07-15 23:51:48.495974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.452 qpair failed and we were unable to recover it. 00:25:13.452 [2024-07-15 23:51:48.496284] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.452 [2024-07-15 23:51:48.496361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.452 qpair failed and we were unable to recover it. 00:25:13.452 [2024-07-15 23:51:48.496653] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.452 [2024-07-15 23:51:48.496730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.452 qpair failed and we were unable to recover it. 
00:25:13.452 [2024-07-15 23:51:48.496946] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.452 [2024-07-15 23:51:48.497024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.452 qpair failed and we were unable to recover it. 00:25:13.452 [2024-07-15 23:51:48.497353] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.452 [2024-07-15 23:51:48.497430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.452 qpair failed and we were unable to recover it. 00:25:13.452 [2024-07-15 23:51:48.497750] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.452 [2024-07-15 23:51:48.497827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.452 qpair failed and we were unable to recover it. 00:25:13.452 [2024-07-15 23:51:48.498053] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.498115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 00:25:13.453 [2024-07-15 23:51:48.498381] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.498458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 00:25:13.453 [2024-07-15 23:51:48.498700] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.498777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 00:25:13.453 [2024-07-15 23:51:48.499059] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.499138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 00:25:13.453 [2024-07-15 23:51:48.499413] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.499489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 00:25:13.453 [2024-07-15 23:51:48.499740] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.499799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 00:25:13.453 [2024-07-15 23:51:48.500052] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.500134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 
00:25:13.453 [2024-07-15 23:51:48.500403] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.500461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 00:25:13.453 [2024-07-15 23:51:48.500750] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.500809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 00:25:13.453 [2024-07-15 23:51:48.501074] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.501155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 00:25:13.453 [2024-07-15 23:51:48.501451] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.501527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 00:25:13.453 [2024-07-15 23:51:48.501788] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.501857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 00:25:13.453 [2024-07-15 23:51:48.502182] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.502260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 00:25:13.453 [2024-07-15 23:51:48.502585] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.502668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 00:25:13.453 [2024-07-15 23:51:48.502914] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.502990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 00:25:13.453 [2024-07-15 23:51:48.503309] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.503388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 00:25:13.453 [2024-07-15 23:51:48.503624] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.503701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 
00:25:13.453 [2024-07-15 23:51:48.503933] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.504022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 00:25:13.453 [2024-07-15 23:51:48.504336] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.504413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 00:25:13.453 [2024-07-15 23:51:48.504668] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.504745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 00:25:13.453 [2024-07-15 23:51:48.505025] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.505104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 00:25:13.453 [2024-07-15 23:51:48.505335] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.505414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 00:25:13.453 [2024-07-15 23:51:48.505718] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.505778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 00:25:13.453 [2024-07-15 23:51:48.506052] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.506129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 00:25:13.453 [2024-07-15 23:51:48.506391] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.506452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 00:25:13.453 [2024-07-15 23:51:48.506677] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.506737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 00:25:13.453 [2024-07-15 23:51:48.506988] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.507048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 
00:25:13.453 [2024-07-15 23:51:48.507332] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.507408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 00:25:13.453 [2024-07-15 23:51:48.507683] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.507760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 00:25:13.453 [2024-07-15 23:51:48.508042] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.508120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 00:25:13.453 [2024-07-15 23:51:48.508403] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.508480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 00:25:13.453 [2024-07-15 23:51:48.508763] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.508839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 00:25:13.453 [2024-07-15 23:51:48.509120] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.509199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 00:25:13.453 [2024-07-15 23:51:48.509401] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.509463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 00:25:13.453 [2024-07-15 23:51:48.509724] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.509783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 00:25:13.453 [2024-07-15 23:51:48.510039] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.510120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 00:25:13.453 [2024-07-15 23:51:48.510409] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.510486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 
00:25:13.453 [2024-07-15 23:51:48.510735] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.510793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 00:25:13.453 [2024-07-15 23:51:48.511093] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.511172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 00:25:13.453 [2024-07-15 23:51:48.511444] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.511522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 00:25:13.453 [2024-07-15 23:51:48.511775] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.511834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 00:25:13.453 [2024-07-15 23:51:48.512090] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.512169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 00:25:13.453 [2024-07-15 23:51:48.512492] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.512579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 00:25:13.453 [2024-07-15 23:51:48.512807] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.512866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 00:25:13.453 [2024-07-15 23:51:48.513124] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.453 [2024-07-15 23:51:48.513202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.453 qpair failed and we were unable to recover it. 00:25:13.453 [2024-07-15 23:51:48.513495] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.454 [2024-07-15 23:51:48.513573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.454 qpair failed and we were unable to recover it. 00:25:13.454 [2024-07-15 23:51:48.513814] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.454 [2024-07-15 23:51:48.513875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.454 qpair failed and we were unable to recover it. 
00:25:13.454 [2024-07-15 23:51:48.514124] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.454 [2024-07-15 23:51:48.514202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.454 qpair failed and we were unable to recover it. 00:25:13.454 [2024-07-15 23:51:48.514484] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.454 [2024-07-15 23:51:48.514562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.454 qpair failed and we were unable to recover it. 00:25:13.454 [2024-07-15 23:51:48.514846] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.454 [2024-07-15 23:51:48.514906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.454 qpair failed and we were unable to recover it. 00:25:13.454 [2024-07-15 23:51:48.515219] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.454 [2024-07-15 23:51:48.515308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.454 qpair failed and we were unable to recover it. 00:25:13.454 [2024-07-15 23:51:48.515607] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.454 [2024-07-15 23:51:48.515678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.454 qpair failed and we were unable to recover it. 00:25:13.454 [2024-07-15 23:51:48.515901] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.454 [2024-07-15 23:51:48.515977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.454 qpair failed and we were unable to recover it. 00:25:13.454 [2024-07-15 23:51:48.516220] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.454 [2024-07-15 23:51:48.516279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.454 qpair failed and we were unable to recover it. 00:25:13.454 [2024-07-15 23:51:48.516539] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.454 [2024-07-15 23:51:48.516615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.454 qpair failed and we were unable to recover it. 00:25:13.454 [2024-07-15 23:51:48.516835] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.454 [2024-07-15 23:51:48.516893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.454 qpair failed and we were unable to recover it. 00:25:13.454 [2024-07-15 23:51:48.517233] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.454 [2024-07-15 23:51:48.517312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.454 qpair failed and we were unable to recover it. 
00:25:13.454 [2024-07-15 23:51:48.517541] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.454 [2024-07-15 23:51:48.517619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.454 qpair failed and we were unable to recover it.
[... the same three-line error (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats back-to-back for every successive reconnect attempt from 23:51:48.517 through 23:51:48.590; every attempt fails the same way and the qpair is never recovered ...]
00:25:13.741 [2024-07-15 23:51:48.590172] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.741 [2024-07-15 23:51:48.590232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.741 qpair failed and we were unable to recover it.
00:25:13.741 [2024-07-15 23:51:48.590519] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.741 [2024-07-15 23:51:48.590595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.741 qpair failed and we were unable to recover it. 00:25:13.741 [2024-07-15 23:51:48.590889] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.741 [2024-07-15 23:51:48.590943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.741 qpair failed and we were unable to recover it. 00:25:13.741 [2024-07-15 23:51:48.591220] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.741 [2024-07-15 23:51:48.591310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.741 qpair failed and we were unable to recover it. 00:25:13.741 [2024-07-15 23:51:48.591581] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.742 [2024-07-15 23:51:48.591658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.742 qpair failed and we were unable to recover it. 00:25:13.742 [2024-07-15 23:51:48.591906] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.742 [2024-07-15 23:51:48.591980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.742 qpair failed and we were unable to recover it. 00:25:13.742 [2024-07-15 23:51:48.592276] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.742 [2024-07-15 23:51:48.592332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.742 qpair failed and we were unable to recover it. 00:25:13.742 [2024-07-15 23:51:48.592602] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.742 [2024-07-15 23:51:48.592681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.742 qpair failed and we were unable to recover it. 00:25:13.742 [2024-07-15 23:51:48.592972] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.742 [2024-07-15 23:51:48.593027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.742 qpair failed and we were unable to recover it. 00:25:13.742 [2024-07-15 23:51:48.593290] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.742 [2024-07-15 23:51:48.593350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.742 qpair failed and we were unable to recover it. 00:25:13.742 [2024-07-15 23:51:48.593627] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.742 [2024-07-15 23:51:48.593705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.742 qpair failed and we were unable to recover it. 
00:25:13.742 [2024-07-15 23:51:48.593924] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.742 [2024-07-15 23:51:48.593999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.742 qpair failed and we were unable to recover it. 00:25:13.742 [2024-07-15 23:51:48.594305] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.742 [2024-07-15 23:51:48.594369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.742 qpair failed and we were unable to recover it. 00:25:13.742 [2024-07-15 23:51:48.594573] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.742 [2024-07-15 23:51:48.594651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.742 qpair failed and we were unable to recover it. 00:25:13.742 [2024-07-15 23:51:48.594946] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.742 [2024-07-15 23:51:48.595029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.742 qpair failed and we were unable to recover it. 00:25:13.742 [2024-07-15 23:51:48.595252] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.742 [2024-07-15 23:51:48.595313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.742 qpair failed and we were unable to recover it. 00:25:13.743 [2024-07-15 23:51:48.595549] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.743 [2024-07-15 23:51:48.595627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.743 qpair failed and we were unable to recover it. 00:25:13.743 [2024-07-15 23:51:48.595867] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.743 [2024-07-15 23:51:48.595925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.743 qpair failed and we were unable to recover it. 00:25:13.743 [2024-07-15 23:51:48.596218] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.743 [2024-07-15 23:51:48.596278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.743 qpair failed and we were unable to recover it. 00:25:13.743 [2024-07-15 23:51:48.596511] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.743 [2024-07-15 23:51:48.596587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.743 qpair failed and we were unable to recover it. 00:25:13.743 [2024-07-15 23:51:48.596867] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.743 [2024-07-15 23:51:48.596927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.743 qpair failed and we were unable to recover it. 
00:25:13.743 [2024-07-15 23:51:48.597266] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.743 [2024-07-15 23:51:48.597354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.743 qpair failed and we were unable to recover it. 00:25:13.743 [2024-07-15 23:51:48.597680] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.743 [2024-07-15 23:51:48.597757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.743 qpair failed and we were unable to recover it. 00:25:13.743 [2024-07-15 23:51:48.598045] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.743 [2024-07-15 23:51:48.598107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.743 qpair failed and we were unable to recover it. 00:25:13.743 [2024-07-15 23:51:48.598371] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.743 [2024-07-15 23:51:48.598448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.743 qpair failed and we were unable to recover it. 00:25:13.743 [2024-07-15 23:51:48.598725] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.743 [2024-07-15 23:51:48.598802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.743 qpair failed and we were unable to recover it. 00:25:13.743 [2024-07-15 23:51:48.599127] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.743 [2024-07-15 23:51:48.599205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.743 qpair failed and we were unable to recover it. 00:25:13.743 [2024-07-15 23:51:48.599456] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.743 [2024-07-15 23:51:48.599534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.743 qpair failed and we were unable to recover it. 00:25:13.743 [2024-07-15 23:51:48.599818] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.743 [2024-07-15 23:51:48.599877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.743 qpair failed and we were unable to recover it. 00:25:13.743 [2024-07-15 23:51:48.600165] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.743 [2024-07-15 23:51:48.600244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.743 qpair failed and we were unable to recover it. 00:25:13.743 [2024-07-15 23:51:48.600501] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.743 [2024-07-15 23:51:48.600577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.743 qpair failed and we were unable to recover it. 
00:25:13.743 [2024-07-15 23:51:48.600821] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.743 [2024-07-15 23:51:48.600882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.743 qpair failed and we were unable to recover it. 00:25:13.743 [2024-07-15 23:51:48.601239] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.743 [2024-07-15 23:51:48.601317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.743 qpair failed and we were unable to recover it. 00:25:13.743 [2024-07-15 23:51:48.601629] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.743 [2024-07-15 23:51:48.601706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.743 qpair failed and we were unable to recover it. 00:25:13.743 [2024-07-15 23:51:48.601974] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.743 [2024-07-15 23:51:48.602032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.743 qpair failed and we were unable to recover it. 00:25:13.743 [2024-07-15 23:51:48.602324] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.743 [2024-07-15 23:51:48.602400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.743 qpair failed and we were unable to recover it. 00:25:13.744 [2024-07-15 23:51:48.602650] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.744 [2024-07-15 23:51:48.602705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.744 qpair failed and we were unable to recover it. 00:25:13.744 [2024-07-15 23:51:48.602953] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.744 [2024-07-15 23:51:48.603043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.744 qpair failed and we were unable to recover it. 00:25:13.744 [2024-07-15 23:51:48.603330] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.744 [2024-07-15 23:51:48.603407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.744 qpair failed and we were unable to recover it. 00:25:13.744 [2024-07-15 23:51:48.603754] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.744 [2024-07-15 23:51:48.603830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.744 qpair failed and we were unable to recover it. 00:25:13.744 [2024-07-15 23:51:48.604147] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.744 [2024-07-15 23:51:48.604225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.744 qpair failed and we were unable to recover it. 
00:25:13.744 [2024-07-15 23:51:48.604520] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.744 [2024-07-15 23:51:48.604582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.744 qpair failed and we were unable to recover it. 00:25:13.744 [2024-07-15 23:51:48.604880] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.744 [2024-07-15 23:51:48.604939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.744 qpair failed and we were unable to recover it. 00:25:13.744 [2024-07-15 23:51:48.605247] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.744 [2024-07-15 23:51:48.605323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.744 qpair failed and we were unable to recover it. 00:25:13.744 [2024-07-15 23:51:48.605602] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.744 [2024-07-15 23:51:48.605678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.744 qpair failed and we were unable to recover it. 00:25:13.744 [2024-07-15 23:51:48.605987] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.744 [2024-07-15 23:51:48.606042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.744 qpair failed and we were unable to recover it. 00:25:13.744 [2024-07-15 23:51:48.606307] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.744 [2024-07-15 23:51:48.606383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.744 qpair failed and we were unable to recover it. 00:25:13.744 [2024-07-15 23:51:48.606629] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.744 [2024-07-15 23:51:48.606704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.744 qpair failed and we were unable to recover it. 00:25:13.744 [2024-07-15 23:51:48.606990] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.744 [2024-07-15 23:51:48.607051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.744 qpair failed and we were unable to recover it. 00:25:13.744 [2024-07-15 23:51:48.607292] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.744 [2024-07-15 23:51:48.607371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.744 qpair failed and we were unable to recover it. 00:25:13.744 [2024-07-15 23:51:48.607686] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.744 [2024-07-15 23:51:48.607761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.744 qpair failed and we were unable to recover it. 
00:25:13.744 [2024-07-15 23:51:48.607993] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.744 [2024-07-15 23:51:48.608055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.744 qpair failed and we were unable to recover it. 00:25:13.744 [2024-07-15 23:51:48.608320] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.744 [2024-07-15 23:51:48.608409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.744 qpair failed and we were unable to recover it. 00:25:13.744 [2024-07-15 23:51:48.608707] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.744 [2024-07-15 23:51:48.608784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.744 qpair failed and we were unable to recover it. 00:25:13.744 [2024-07-15 23:51:48.609084] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.744 [2024-07-15 23:51:48.609163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.744 qpair failed and we were unable to recover it. 00:25:13.744 [2024-07-15 23:51:48.609476] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.744 [2024-07-15 23:51:48.609558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.744 qpair failed and we were unable to recover it. 00:25:13.744 [2024-07-15 23:51:48.609801] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.744 [2024-07-15 23:51:48.609862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.744 qpair failed and we were unable to recover it. 00:25:13.744 [2024-07-15 23:51:48.610141] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.744 [2024-07-15 23:51:48.610219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.744 qpair failed and we were unable to recover it. 00:25:13.744 [2024-07-15 23:51:48.610501] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.744 [2024-07-15 23:51:48.610560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.744 qpair failed and we were unable to recover it. 00:25:13.744 [2024-07-15 23:51:48.610819] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.744 [2024-07-15 23:51:48.610879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.744 qpair failed and we were unable to recover it. 00:25:13.744 [2024-07-15 23:51:48.611190] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.744 [2024-07-15 23:51:48.611269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.744 qpair failed and we were unable to recover it. 
00:25:13.744 [2024-07-15 23:51:48.611588] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.744 [2024-07-15 23:51:48.611665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.744 qpair failed and we were unable to recover it. 00:25:13.744 [2024-07-15 23:51:48.611926] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.744 [2024-07-15 23:51:48.612003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.744 qpair failed and we were unable to recover it. 00:25:13.744 [2024-07-15 23:51:48.612327] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.744 [2024-07-15 23:51:48.612404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.744 qpair failed and we were unable to recover it. 00:25:13.744 [2024-07-15 23:51:48.612723] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.744 [2024-07-15 23:51:48.612801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.744 qpair failed and we were unable to recover it. 00:25:13.744 [2024-07-15 23:51:48.613079] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.744 [2024-07-15 23:51:48.613158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.744 qpair failed and we were unable to recover it. 00:25:13.744 [2024-07-15 23:51:48.613455] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.744 [2024-07-15 23:51:48.613514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.744 qpair failed and we were unable to recover it. 00:25:13.744 [2024-07-15 23:51:48.613806] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.744 [2024-07-15 23:51:48.613860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.744 qpair failed and we were unable to recover it. 00:25:13.744 [2024-07-15 23:51:48.614103] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.744 [2024-07-15 23:51:48.614158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.744 qpair failed and we were unable to recover it. 00:25:13.744 [2024-07-15 23:51:48.614372] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.744 [2024-07-15 23:51:48.614427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.744 qpair failed and we were unable to recover it. 00:25:13.744 [2024-07-15 23:51:48.614718] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.744 [2024-07-15 23:51:48.614795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.744 qpair failed and we were unable to recover it. 
00:25:13.744 [2024-07-15 23:51:48.615087] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.744 [2024-07-15 23:51:48.615166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.744 qpair failed and we were unable to recover it. 00:25:13.744 [2024-07-15 23:51:48.615399] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.744 [2024-07-15 23:51:48.615474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.744 qpair failed and we were unable to recover it. 00:25:13.744 [2024-07-15 23:51:48.615776] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.744 [2024-07-15 23:51:48.615853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.744 qpair failed and we were unable to recover it. 00:25:13.744 [2024-07-15 23:51:48.616128] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.744 [2024-07-15 23:51:48.616207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.744 qpair failed and we were unable to recover it. 00:25:13.744 [2024-07-15 23:51:48.616497] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.744 [2024-07-15 23:51:48.616573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.744 qpair failed and we were unable to recover it. 00:25:13.744 [2024-07-15 23:51:48.616820] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.744 [2024-07-15 23:51:48.616881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.744 qpair failed and we were unable to recover it. 00:25:13.744 [2024-07-15 23:51:48.617217] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.744 [2024-07-15 23:51:48.617295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.744 qpair failed and we were unable to recover it. 00:25:13.744 [2024-07-15 23:51:48.617611] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.744 [2024-07-15 23:51:48.617688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.744 qpair failed and we were unable to recover it. 00:25:13.744 [2024-07-15 23:51:48.617901] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.745 [2024-07-15 23:51:48.617978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.745 qpair failed and we were unable to recover it. 00:25:13.745 [2024-07-15 23:51:48.618235] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.745 [2024-07-15 23:51:48.618313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.745 qpair failed and we were unable to recover it. 
00:25:13.745 [2024-07-15 23:51:48.618626] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.745 [2024-07-15 23:51:48.618702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.745 qpair failed and we were unable to recover it. 00:25:13.745 [2024-07-15 23:51:48.618987] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.745 [2024-07-15 23:51:48.619050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.745 qpair failed and we were unable to recover it. 00:25:13.745 [2024-07-15 23:51:48.619297] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.745 [2024-07-15 23:51:48.619375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.745 qpair failed and we were unable to recover it. 00:25:13.745 [2024-07-15 23:51:48.619595] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.745 [2024-07-15 23:51:48.619674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.745 qpair failed and we were unable to recover it. 00:25:13.745 [2024-07-15 23:51:48.619921] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.745 [2024-07-15 23:51:48.620000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.745 qpair failed and we were unable to recover it. 00:25:13.745 [2024-07-15 23:51:48.620324] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.745 [2024-07-15 23:51:48.620402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.745 qpair failed and we were unable to recover it. 00:25:13.745 [2024-07-15 23:51:48.620624] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.745 [2024-07-15 23:51:48.620703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.745 qpair failed and we were unable to recover it. 00:25:13.745 [2024-07-15 23:51:48.620991] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.745 [2024-07-15 23:51:48.621054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.745 qpair failed and we were unable to recover it. 00:25:13.745 [2024-07-15 23:51:48.621332] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.745 [2024-07-15 23:51:48.621410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.745 qpair failed and we were unable to recover it. 00:25:13.745 [2024-07-15 23:51:48.621737] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.745 [2024-07-15 23:51:48.621793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.745 qpair failed and we were unable to recover it. 
00:25:13.745 [2024-07-15 23:51:48.622045] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.745 [2024-07-15 23:51:48.622106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.745 qpair failed and we were unable to recover it. 00:25:13.745 [2024-07-15 23:51:48.622397] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.745 [2024-07-15 23:51:48.622483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.745 qpair failed and we were unable to recover it. 00:25:13.745 [2024-07-15 23:51:48.622739] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.745 [2024-07-15 23:51:48.622814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.745 qpair failed and we were unable to recover it. 00:25:13.745 [2024-07-15 23:51:48.623141] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.745 [2024-07-15 23:51:48.623197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.745 qpair failed and we were unable to recover it. 00:25:13.745 [2024-07-15 23:51:48.623443] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.745 [2024-07-15 23:51:48.623522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.745 qpair failed and we were unable to recover it. 00:25:13.745 [2024-07-15 23:51:48.623797] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.745 [2024-07-15 23:51:48.623873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.745 qpair failed and we were unable to recover it. 00:25:13.745 [2024-07-15 23:51:48.624167] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.745 [2024-07-15 23:51:48.624247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.745 qpair failed and we were unable to recover it. 00:25:13.745 [2024-07-15 23:51:48.624558] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.745 [2024-07-15 23:51:48.624636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.745 qpair failed and we were unable to recover it. 00:25:13.745 [2024-07-15 23:51:48.624931] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.745 [2024-07-15 23:51:48.625006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.745 qpair failed and we were unable to recover it. 00:25:13.745 [2024-07-15 23:51:48.625323] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.745 [2024-07-15 23:51:48.625378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.745 qpair failed and we were unable to recover it. 
00:25:13.745 [2024-07-15 23:51:48.625647] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.745 [2024-07-15 23:51:48.625723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.745 qpair failed and we were unable to recover it. 00:25:13.745 [2024-07-15 23:51:48.626011] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.745 [2024-07-15 23:51:48.626074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.745 qpair failed and we were unable to recover it. 00:25:13.745 [2024-07-15 23:51:48.626389] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.745 [2024-07-15 23:51:48.626466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.745 qpair failed and we were unable to recover it. 00:25:13.745 [2024-07-15 23:51:48.626782] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.745 [2024-07-15 23:51:48.626868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.745 qpair failed and we were unable to recover it. 00:25:13.745 [2024-07-15 23:51:48.627166] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.745 [2024-07-15 23:51:48.627244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.745 qpair failed and we were unable to recover it. 00:25:13.745 [2024-07-15 23:51:48.627571] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.745 [2024-07-15 23:51:48.627648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.745 qpair failed and we were unable to recover it. 00:25:13.745 [2024-07-15 23:51:48.627886] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.745 [2024-07-15 23:51:48.627946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.745 qpair failed and we were unable to recover it. 00:25:13.745 [2024-07-15 23:51:48.628248] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.745 [2024-07-15 23:51:48.628326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.745 qpair failed and we were unable to recover it. 00:25:13.745 [2024-07-15 23:51:48.628550] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.745 [2024-07-15 23:51:48.628630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.745 qpair failed and we were unable to recover it. 00:25:13.745 [2024-07-15 23:51:48.628894] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.745 [2024-07-15 23:51:48.628949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.745 qpair failed and we were unable to recover it. 
00:25:13.745 [2024-07-15 23:51:48.629200] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.745 [2024-07-15 23:51:48.629276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.745 qpair failed and we were unable to recover it. 00:25:13.745 [2024-07-15 23:51:48.629543] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.745 [2024-07-15 23:51:48.629620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.745 qpair failed and we were unable to recover it. 00:25:13.745 [2024-07-15 23:51:48.629867] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.745 [2024-07-15 23:51:48.629928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.745 qpair failed and we were unable to recover it. 00:25:13.745 [2024-07-15 23:51:48.630241] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.745 [2024-07-15 23:51:48.630320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.745 qpair failed and we were unable to recover it. 00:25:13.745 [2024-07-15 23:51:48.630631] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.745 [2024-07-15 23:51:48.630686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.745 qpair failed and we were unable to recover it. 00:25:13.745 [2024-07-15 23:51:48.630950] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.745 [2024-07-15 23:51:48.631028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.745 qpair failed and we were unable to recover it. 00:25:13.745 [2024-07-15 23:51:48.631279] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.745 [2024-07-15 23:51:48.631357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.745 qpair failed and we were unable to recover it. 00:25:13.745 [2024-07-15 23:51:48.631625] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.745 [2024-07-15 23:51:48.631701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.745 qpair failed and we were unable to recover it. 00:25:13.745 [2024-07-15 23:51:48.631999] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.745 [2024-07-15 23:51:48.632061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.745 qpair failed and we were unable to recover it. 00:25:13.745 [2024-07-15 23:51:48.632382] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.745 [2024-07-15 23:51:48.632437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.745 qpair failed and we were unable to recover it. 
00:25:13.745 [2024-07-15 23:51:48.632640] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.745 [2024-07-15 23:51:48.632714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.745 qpair failed and we were unable to recover it. 00:25:13.745 [2024-07-15 23:51:48.633010] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.745 [2024-07-15 23:51:48.633071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.745 qpair failed and we were unable to recover it. 00:25:13.745 [2024-07-15 23:51:48.633359] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.746 [2024-07-15 23:51:48.633437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.746 qpair failed and we were unable to recover it. 00:25:13.746 [2024-07-15 23:51:48.633716] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.746 [2024-07-15 23:51:48.633793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.746 qpair failed and we were unable to recover it. 00:25:13.746 [2024-07-15 23:51:48.634073] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.746 [2024-07-15 23:51:48.634151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.746 qpair failed and we were unable to recover it. 00:25:13.746 [2024-07-15 23:51:48.634392] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.746 [2024-07-15 23:51:48.634468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.746 qpair failed and we were unable to recover it. 00:25:13.746 [2024-07-15 23:51:48.634745] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.746 [2024-07-15 23:51:48.634805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.746 qpair failed and we were unable to recover it. 00:25:13.746 [2024-07-15 23:51:48.635023] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.746 [2024-07-15 23:51:48.635083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.746 qpair failed and we were unable to recover it. 00:25:13.746 [2024-07-15 23:51:48.635297] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.746 [2024-07-15 23:51:48.635356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.746 qpair failed and we were unable to recover it. 00:25:13.746 [2024-07-15 23:51:48.635589] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.746 [2024-07-15 23:51:48.635648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.746 qpair failed and we were unable to recover it. 
00:25:13.746 [2024-07-15 23:51:48.635867] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.746 [2024-07-15 23:51:48.635926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:13.746 qpair failed and we were unable to recover it.
00:25:13.746 [previous 3 entries repeated verbatim for each retry from 2024-07-15 23:51:48.636207 through 23:51:48.686409; every connect() attempt to 10.0.0.2:4420 on tqpair=0x7feb94000b90 failed with errno = 111 and the qpair could not be recovered]
00:25:13.749 [2024-07-15 23:51:48.686493] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7b50e0 (9): Bad file descriptor
00:25:13.749 [2024-07-15 23:51:48.686904] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.749 [2024-07-15 23:51:48.687009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.749 qpair failed and we were unable to recover it.
00:25:13.750 [previous 3 entries repeated verbatim for each retry from 2024-07-15 23:51:48.687309 through 23:51:48.706838; every connect() attempt to 10.0.0.2:4420 on tqpair=0x7feb8c000b90 failed with errno = 111 and the qpair could not be recovered]
00:25:13.751 [2024-07-15 23:51:48.707115] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.751 [2024-07-15 23:51:48.707177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:13.751 qpair failed and we were unable to recover it.
00:25:13.751 [2024-07-15 23:51:48.707417] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.751 [2024-07-15 23:51:48.707482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.751 qpair failed and we were unable to recover it. 00:25:13.751 [2024-07-15 23:51:48.707787] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.751 [2024-07-15 23:51:48.707851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.751 qpair failed and we were unable to recover it. 00:25:13.751 [2024-07-15 23:51:48.708108] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.751 [2024-07-15 23:51:48.708172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.751 qpair failed and we were unable to recover it. 00:25:13.751 [2024-07-15 23:51:48.708458] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.751 [2024-07-15 23:51:48.708514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.751 qpair failed and we were unable to recover it. 00:25:13.752 [2024-07-15 23:51:48.708726] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.752 [2024-07-15 23:51:48.708806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.752 qpair failed and we were unable to recover it. 00:25:13.752 [2024-07-15 23:51:48.709063] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.752 [2024-07-15 23:51:48.709134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.752 qpair failed and we were unable to recover it. 00:25:13.752 [2024-07-15 23:51:48.709399] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.752 [2024-07-15 23:51:48.709464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.752 qpair failed and we were unable to recover it. 00:25:13.752 [2024-07-15 23:51:48.709697] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.752 [2024-07-15 23:51:48.709765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.752 qpair failed and we were unable to recover it. 00:25:13.752 [2024-07-15 23:51:48.710056] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.752 [2024-07-15 23:51:48.710122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.752 qpair failed and we were unable to recover it. 00:25:13.752 [2024-07-15 23:51:48.710399] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.752 [2024-07-15 23:51:48.710464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.752 qpair failed and we were unable to recover it. 
00:25:13.752 [2024-07-15 23:51:48.710747] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.752 [2024-07-15 23:51:48.710812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.752 qpair failed and we were unable to recover it. 00:25:13.752 [2024-07-15 23:51:48.711136] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.752 [2024-07-15 23:51:48.711192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.752 qpair failed and we were unable to recover it. 00:25:13.752 [2024-07-15 23:51:48.711381] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.752 [2024-07-15 23:51:48.711438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.752 qpair failed and we were unable to recover it. 00:25:13.752 [2024-07-15 23:51:48.711690] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.752 [2024-07-15 23:51:48.711757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.752 qpair failed and we were unable to recover it. 00:25:13.752 [2024-07-15 23:51:48.712015] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.752 [2024-07-15 23:51:48.712080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.752 qpair failed and we were unable to recover it. 00:25:13.752 [2024-07-15 23:51:48.712361] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.752 [2024-07-15 23:51:48.712417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.752 qpair failed and we were unable to recover it. 00:25:13.752 [2024-07-15 23:51:48.712638] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.752 [2024-07-15 23:51:48.712704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.752 qpair failed and we were unable to recover it. 00:25:13.752 [2024-07-15 23:51:48.712954] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.752 [2024-07-15 23:51:48.713035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.752 qpair failed and we were unable to recover it. 00:25:13.752 [2024-07-15 23:51:48.713346] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.752 [2024-07-15 23:51:48.713412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.752 qpair failed and we were unable to recover it. 00:25:13.752 [2024-07-15 23:51:48.713709] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.752 [2024-07-15 23:51:48.713778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.752 qpair failed and we were unable to recover it. 
00:25:13.752 [2024-07-15 23:51:48.714049] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.752 [2024-07-15 23:51:48.714117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.752 qpair failed and we were unable to recover it. 00:25:13.752 [2024-07-15 23:51:48.714419] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.752 [2024-07-15 23:51:48.714484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.752 qpair failed and we were unable to recover it. 00:25:13.752 [2024-07-15 23:51:48.714797] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.752 [2024-07-15 23:51:48.714863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.752 qpair failed and we were unable to recover it. 00:25:13.752 [2024-07-15 23:51:48.715158] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.752 [2024-07-15 23:51:48.715214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.752 qpair failed and we were unable to recover it. 00:25:13.752 [2024-07-15 23:51:48.715473] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.752 [2024-07-15 23:51:48.715538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.752 qpair failed and we were unable to recover it. 00:25:13.752 [2024-07-15 23:51:48.715807] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.753 [2024-07-15 23:51:48.715872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.753 qpair failed and we were unable to recover it. 00:25:13.753 [2024-07-15 23:51:48.716147] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.753 [2024-07-15 23:51:48.716214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.753 qpair failed and we were unable to recover it. 00:25:13.753 [2024-07-15 23:51:48.716489] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.753 [2024-07-15 23:51:48.716553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.753 qpair failed and we were unable to recover it. 00:25:13.753 [2024-07-15 23:51:48.716837] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.753 [2024-07-15 23:51:48.716901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.753 qpair failed and we were unable to recover it. 00:25:13.753 [2024-07-15 23:51:48.717157] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.753 [2024-07-15 23:51:48.717225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.753 qpair failed and we were unable to recover it. 
00:25:13.753 [2024-07-15 23:51:48.717487] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.753 [2024-07-15 23:51:48.717553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.753 qpair failed and we were unable to recover it. 00:25:13.753 [2024-07-15 23:51:48.717828] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.753 [2024-07-15 23:51:48.717893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.753 qpair failed and we were unable to recover it. 00:25:13.753 [2024-07-15 23:51:48.718150] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.753 [2024-07-15 23:51:48.718216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.753 qpair failed and we were unable to recover it. 00:25:13.753 [2024-07-15 23:51:48.718480] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.753 [2024-07-15 23:51:48.718545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.753 qpair failed and we were unable to recover it. 00:25:13.753 [2024-07-15 23:51:48.718819] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.753 [2024-07-15 23:51:48.718884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.753 qpair failed and we were unable to recover it. 00:25:13.753 [2024-07-15 23:51:48.719214] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.753 [2024-07-15 23:51:48.719270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.753 qpair failed and we were unable to recover it. 00:25:13.753 [2024-07-15 23:51:48.719551] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.753 [2024-07-15 23:51:48.719616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.753 qpair failed and we were unable to recover it. 00:25:13.753 [2024-07-15 23:51:48.719930] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.753 [2024-07-15 23:51:48.720014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.753 qpair failed and we were unable to recover it. 00:25:13.753 [2024-07-15 23:51:48.720296] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.753 [2024-07-15 23:51:48.720361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.753 qpair failed and we were unable to recover it. 00:25:13.753 [2024-07-15 23:51:48.720675] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.753 [2024-07-15 23:51:48.720739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.753 qpair failed and we were unable to recover it. 
00:25:13.753 [2024-07-15 23:51:48.721043] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.753 [2024-07-15 23:51:48.721100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.753 qpair failed and we were unable to recover it. 00:25:13.753 [2024-07-15 23:51:48.721356] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.753 [2024-07-15 23:51:48.721422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.753 qpair failed and we were unable to recover it. 00:25:13.753 [2024-07-15 23:51:48.721688] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.753 [2024-07-15 23:51:48.721755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.753 qpair failed and we were unable to recover it. 00:25:13.753 [2024-07-15 23:51:48.722007] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.753 [2024-07-15 23:51:48.722073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.753 qpair failed and we were unable to recover it. 00:25:13.753 [2024-07-15 23:51:48.722337] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.753 [2024-07-15 23:51:48.722402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.753 qpair failed and we were unable to recover it. 00:25:13.753 [2024-07-15 23:51:48.722674] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.753 [2024-07-15 23:51:48.722748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.753 qpair failed and we were unable to recover it. 00:25:13.753 [2024-07-15 23:51:48.723030] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.753 [2024-07-15 23:51:48.723097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.753 qpair failed and we were unable to recover it. 00:25:13.753 [2024-07-15 23:51:48.723372] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.753 [2024-07-15 23:51:48.723439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.753 qpair failed and we were unable to recover it. 00:25:13.753 [2024-07-15 23:51:48.723730] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.753 [2024-07-15 23:51:48.723786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.753 qpair failed and we were unable to recover it. 00:25:13.753 [2024-07-15 23:51:48.724091] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.753 [2024-07-15 23:51:48.724157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.753 qpair failed and we were unable to recover it. 
00:25:13.753 [2024-07-15 23:51:48.724463] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.753 [2024-07-15 23:51:48.724527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.753 qpair failed and we were unable to recover it. 00:25:13.753 [2024-07-15 23:51:48.724768] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.753 [2024-07-15 23:51:48.724836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.753 qpair failed and we were unable to recover it. 00:25:13.753 [2024-07-15 23:51:48.725099] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.753 [2024-07-15 23:51:48.725166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.753 qpair failed and we were unable to recover it. 00:25:13.753 [2024-07-15 23:51:48.725469] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.753 [2024-07-15 23:51:48.725534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.753 qpair failed and we were unable to recover it. 00:25:13.753 [2024-07-15 23:51:48.725810] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.754 [2024-07-15 23:51:48.725875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.754 qpair failed and we were unable to recover it. 00:25:13.754 [2024-07-15 23:51:48.726173] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.754 [2024-07-15 23:51:48.726239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.754 qpair failed and we were unable to recover it. 00:25:13.754 [2024-07-15 23:51:48.726516] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.754 [2024-07-15 23:51:48.726584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.754 qpair failed and we were unable to recover it. 00:25:13.754 [2024-07-15 23:51:48.726891] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.754 [2024-07-15 23:51:48.726969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.754 qpair failed and we were unable to recover it. 00:25:13.754 [2024-07-15 23:51:48.727230] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.754 [2024-07-15 23:51:48.727296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.754 qpair failed and we were unable to recover it. 00:25:13.754 [2024-07-15 23:51:48.727622] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.754 [2024-07-15 23:51:48.727687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.754 qpair failed and we were unable to recover it. 
00:25:13.754 [2024-07-15 23:51:48.727924] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.754 [2024-07-15 23:51:48.728042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.754 qpair failed and we were unable to recover it. 00:25:13.754 [2024-07-15 23:51:48.728364] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.754 [2024-07-15 23:51:48.728431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.754 qpair failed and we were unable to recover it. 00:25:13.754 [2024-07-15 23:51:48.728744] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.754 [2024-07-15 23:51:48.728809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.754 qpair failed and we were unable to recover it. 00:25:13.754 [2024-07-15 23:51:48.729040] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.754 [2024-07-15 23:51:48.729107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.754 qpair failed and we were unable to recover it. 00:25:13.754 [2024-07-15 23:51:48.729372] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.754 [2024-07-15 23:51:48.729437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.754 qpair failed and we were unable to recover it. 00:25:13.754 [2024-07-15 23:51:48.729717] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.754 [2024-07-15 23:51:48.729781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.754 qpair failed and we were unable to recover it. 00:25:13.754 [2024-07-15 23:51:48.730045] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.754 [2024-07-15 23:51:48.730111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.754 qpair failed and we were unable to recover it. 00:25:13.754 [2024-07-15 23:51:48.730400] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.754 [2024-07-15 23:51:48.730464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.754 qpair failed and we were unable to recover it. 00:25:13.754 [2024-07-15 23:51:48.730734] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.754 [2024-07-15 23:51:48.730799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.754 qpair failed and we were unable to recover it. 00:25:13.754 [2024-07-15 23:51:48.731099] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.754 [2024-07-15 23:51:48.731165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.754 qpair failed and we were unable to recover it. 
00:25:13.754 [2024-07-15 23:51:48.731431] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.754 [2024-07-15 23:51:48.731495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.754 qpair failed and we were unable to recover it. 00:25:13.754 [2024-07-15 23:51:48.731802] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.754 [2024-07-15 23:51:48.731866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.754 qpair failed and we were unable to recover it. 00:25:13.754 [2024-07-15 23:51:48.732117] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.754 [2024-07-15 23:51:48.732185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.754 qpair failed and we were unable to recover it. 00:25:13.754 [2024-07-15 23:51:48.732465] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.754 [2024-07-15 23:51:48.732520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.754 qpair failed and we were unable to recover it. 00:25:13.754 [2024-07-15 23:51:48.732811] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.754 [2024-07-15 23:51:48.732876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.754 qpair failed and we were unable to recover it. 00:25:13.754 [2024-07-15 23:51:48.733155] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.754 [2024-07-15 23:51:48.733222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.754 qpair failed and we were unable to recover it. 00:25:13.754 [2024-07-15 23:51:48.733524] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.754 [2024-07-15 23:51:48.733588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.754 qpair failed and we were unable to recover it. 00:25:13.754 [2024-07-15 23:51:48.733816] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.754 [2024-07-15 23:51:48.733883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.754 qpair failed and we were unable to recover it. 00:25:13.754 [2024-07-15 23:51:48.734218] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.754 [2024-07-15 23:51:48.734284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.754 qpair failed and we were unable to recover it. 00:25:13.754 [2024-07-15 23:51:48.734541] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.754 [2024-07-15 23:51:48.734607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.754 qpair failed and we were unable to recover it. 
00:25:13.754 [2024-07-15 23:51:48.734921] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.754 [2024-07-15 23:51:48.735004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.754 qpair failed and we were unable to recover it. 00:25:13.754 [2024-07-15 23:51:48.735271] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.754 [2024-07-15 23:51:48.735335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.754 qpair failed and we were unable to recover it. 00:25:13.754 [2024-07-15 23:51:48.735649] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.754 [2024-07-15 23:51:48.735713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.755 qpair failed and we were unable to recover it. 00:25:13.755 [2024-07-15 23:51:48.736027] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.755 [2024-07-15 23:51:48.736083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.755 qpair failed and we were unable to recover it. 00:25:13.755 [2024-07-15 23:51:48.736372] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.755 [2024-07-15 23:51:48.736437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.755 qpair failed and we were unable to recover it. 00:25:13.755 [2024-07-15 23:51:48.736712] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.755 [2024-07-15 23:51:48.736786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.755 qpair failed and we were unable to recover it. 00:25:13.755 [2024-07-15 23:51:48.737063] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.755 [2024-07-15 23:51:48.737130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.755 qpair failed and we were unable to recover it. 00:25:13.755 [2024-07-15 23:51:48.737405] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.755 [2024-07-15 23:51:48.737469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.755 qpair failed and we were unable to recover it. 00:25:13.755 [2024-07-15 23:51:48.737780] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.755 [2024-07-15 23:51:48.737835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.755 qpair failed and we were unable to recover it. 00:25:13.755 [2024-07-15 23:51:48.738149] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.755 [2024-07-15 23:51:48.738215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.755 qpair failed and we were unable to recover it. 
00:25:13.755 [2024-07-15 23:51:48.738541] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.755 [2024-07-15 23:51:48.738605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.755 qpair failed and we were unable to recover it. 00:25:13.755 [2024-07-15 23:51:48.738874] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.755 [2024-07-15 23:51:48.738939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.755 qpair failed and we were unable to recover it. 00:25:13.755 [2024-07-15 23:51:48.739226] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.755 [2024-07-15 23:51:48.739294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.755 qpair failed and we were unable to recover it. 00:25:13.755 [2024-07-15 23:51:48.739570] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.755 [2024-07-15 23:51:48.739636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.755 qpair failed and we were unable to recover it. 00:25:13.755 [2024-07-15 23:51:48.739941] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.755 [2024-07-15 23:51:48.740023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.755 qpair failed and we were unable to recover it. 00:25:13.755 [2024-07-15 23:51:48.740304] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.755 [2024-07-15 23:51:48.740372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.755 qpair failed and we were unable to recover it. 00:25:13.755 [2024-07-15 23:51:48.740659] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.755 [2024-07-15 23:51:48.740715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.755 qpair failed and we were unable to recover it. 00:25:13.755 [2024-07-15 23:51:48.740947] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.755 [2024-07-15 23:51:48.741019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.755 qpair failed and we were unable to recover it. 00:25:13.755 [2024-07-15 23:51:48.741287] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.755 [2024-07-15 23:51:48.741353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.755 qpair failed and we were unable to recover it. 00:25:13.755 [2024-07-15 23:51:48.741644] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.755 [2024-07-15 23:51:48.741711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.755 qpair failed and we were unable to recover it. 
00:25:13.755 [2024-07-15 23:51:48.741943] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.755 [2024-07-15 23:51:48.742167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.755 qpair failed and we were unable to recover it. 00:25:13.755 [2024-07-15 23:51:48.742485] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.755 [2024-07-15 23:51:48.742550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.755 qpair failed and we were unable to recover it. 00:25:13.755 [2024-07-15 23:51:48.742850] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.755 [2024-07-15 23:51:48.742905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.755 qpair failed and we were unable to recover it. 00:25:13.755 [2024-07-15 23:51:48.743195] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.755 [2024-07-15 23:51:48.743260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.755 qpair failed and we were unable to recover it. 00:25:13.755 [2024-07-15 23:51:48.743564] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.755 [2024-07-15 23:51:48.743628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.755 qpair failed and we were unable to recover it. 00:25:13.755 [2024-07-15 23:51:48.743865] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.755 [2024-07-15 23:51:48.743929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.755 qpair failed and we were unable to recover it. 00:25:13.755 [2024-07-15 23:51:48.744268] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.755 [2024-07-15 23:51:48.744333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.755 qpair failed and we were unable to recover it. 00:25:13.755 [2024-07-15 23:51:48.744604] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.755 [2024-07-15 23:51:48.744670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.755 qpair failed and we were unable to recover it. 00:25:13.755 [2024-07-15 23:51:48.744987] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.755 [2024-07-15 23:51:48.745055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.755 qpair failed and we were unable to recover it. 00:25:13.755 [2024-07-15 23:51:48.745341] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.755 [2024-07-15 23:51:48.745396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.755 qpair failed and we were unable to recover it. 
00:25:13.755 [2024-07-15 23:51:48.745661] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.755 [2024-07-15 23:51:48.745726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.755 qpair failed and we were unable to recover it. 00:25:13.755 [2024-07-15 23:51:48.746003] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.755 [2024-07-15 23:51:48.746071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.755 qpair failed and we were unable to recover it. 00:25:13.755 [2024-07-15 23:51:48.746394] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.755 [2024-07-15 23:51:48.746450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.756 qpair failed and we were unable to recover it. 00:25:13.756 [2024-07-15 23:51:48.746713] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.756 [2024-07-15 23:51:48.746779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.756 qpair failed and we were unable to recover it. 00:25:13.756 [2024-07-15 23:51:48.747052] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.756 [2024-07-15 23:51:48.747118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.756 qpair failed and we were unable to recover it. 00:25:13.756 [2024-07-15 23:51:48.747370] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.756 [2024-07-15 23:51:48.747435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.756 qpair failed and we were unable to recover it. 00:25:13.756 [2024-07-15 23:51:48.747708] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.756 [2024-07-15 23:51:48.747776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.756 qpair failed and we were unable to recover it. 00:25:13.756 [2024-07-15 23:51:48.748014] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.756 [2024-07-15 23:51:48.748081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.756 qpair failed and we were unable to recover it. 00:25:13.756 [2024-07-15 23:51:48.748344] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.756 [2024-07-15 23:51:48.748409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.756 qpair failed and we were unable to recover it. 00:25:13.756 [2024-07-15 23:51:48.748686] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.756 [2024-07-15 23:51:48.748750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.756 qpair failed and we were unable to recover it. 
00:25:13.756 [2024-07-15 23:51:48.749009] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.756 [2024-07-15 23:51:48.749076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.756 qpair failed and we were unable to recover it. 00:25:13.756 [2024-07-15 23:51:48.749348] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.756 [2024-07-15 23:51:48.749411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.756 qpair failed and we were unable to recover it. 00:25:13.756 [2024-07-15 23:51:48.749716] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.756 [2024-07-15 23:51:48.749781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.756 qpair failed and we were unable to recover it. 00:25:13.756 [2024-07-15 23:51:48.750047] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.756 [2024-07-15 23:51:48.750116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.756 qpair failed and we were unable to recover it. 00:25:13.756 [2024-07-15 23:51:48.750383] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.756 [2024-07-15 23:51:48.750451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.756 qpair failed and we were unable to recover it. 00:25:13.756 [2024-07-15 23:51:48.750724] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.756 [2024-07-15 23:51:48.750800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.756 qpair failed and we were unable to recover it. 00:25:13.756 [2024-07-15 23:51:48.751067] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.756 [2024-07-15 23:51:48.751133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.756 qpair failed and we were unable to recover it. 00:25:13.756 [2024-07-15 23:51:48.751374] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.756 [2024-07-15 23:51:48.751439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.756 qpair failed and we were unable to recover it. 00:25:13.756 [2024-07-15 23:51:48.751705] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.756 [2024-07-15 23:51:48.751772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.756 qpair failed and we were unable to recover it. 00:25:13.756 [2024-07-15 23:51:48.752046] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.756 [2024-07-15 23:51:48.752113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:13.756 qpair failed and we were unable to recover it. 
00:25:13.756 [2024-07-15 23:51:48.752377] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:25:13.756 [2024-07-15 23:51:48.752444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 
00:25:13.756 qpair failed and we were unable to recover it. 
00:25:13.756 [... the same three-line failure pattern (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 23:51:48.752 to 23:51:48.826; around 23:51:48.817 the failing tqpair handle changes from 0x7feb8c000b90 to 0x7feb94000b90 ...] 
00:25:13.762 [2024-07-15 23:51:48.826365] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:25:13.762 [2024-07-15 23:51:48.826430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 
00:25:13.762 qpair failed and we were unable to recover it. 
00:25:13.762 [2024-07-15 23:51:48.826705] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.762 [2024-07-15 23:51:48.826770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.762 qpair failed and we were unable to recover it. 00:25:13.762 [2024-07-15 23:51:48.827048] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.762 [2024-07-15 23:51:48.827115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.762 qpair failed and we were unable to recover it. 00:25:13.762 [2024-07-15 23:51:48.827377] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.762 [2024-07-15 23:51:48.827442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.762 qpair failed and we were unable to recover it. 00:25:13.762 [2024-07-15 23:51:48.827715] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.762 [2024-07-15 23:51:48.827779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.762 qpair failed and we were unable to recover it. 00:25:13.762 [2024-07-15 23:51:48.828003] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.762 [2024-07-15 23:51:48.828070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.762 qpair failed and we were unable to recover it. 00:25:13.762 [2024-07-15 23:51:48.828335] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.762 [2024-07-15 23:51:48.828400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.762 qpair failed and we were unable to recover it. 00:25:13.762 [2024-07-15 23:51:48.828650] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.762 [2024-07-15 23:51:48.828714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.762 qpair failed and we were unable to recover it. 00:25:13.762 [2024-07-15 23:51:48.828937] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.762 [2024-07-15 23:51:48.829015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.762 qpair failed and we were unable to recover it. 00:25:13.762 [2024-07-15 23:51:48.829332] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.762 [2024-07-15 23:51:48.829402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.762 qpair failed and we were unable to recover it. 00:25:13.762 [2024-07-15 23:51:48.829670] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.762 [2024-07-15 23:51:48.829735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.762 qpair failed and we were unable to recover it. 
00:25:13.763 [2024-07-15 23:51:48.830056] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.763 [2024-07-15 23:51:48.830122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.763 qpair failed and we were unable to recover it. 00:25:13.763 [2024-07-15 23:51:48.830389] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.763 [2024-07-15 23:51:48.830456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.763 qpair failed and we were unable to recover it. 00:25:13.763 [2024-07-15 23:51:48.830731] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.763 [2024-07-15 23:51:48.830806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.763 qpair failed and we were unable to recover it. 00:25:13.763 [2024-07-15 23:51:48.831120] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.763 [2024-07-15 23:51:48.831188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.763 qpair failed and we were unable to recover it. 00:25:13.763 [2024-07-15 23:51:48.831468] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.763 [2024-07-15 23:51:48.831533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.763 qpair failed and we were unable to recover it. 00:25:13.763 [2024-07-15 23:51:48.831782] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.763 [2024-07-15 23:51:48.831847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.763 qpair failed and we were unable to recover it. 00:25:13.763 [2024-07-15 23:51:48.832200] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.763 [2024-07-15 23:51:48.832268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.763 qpair failed and we were unable to recover it. 00:25:13.763 [2024-07-15 23:51:48.832536] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.763 [2024-07-15 23:51:48.832602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.763 qpair failed and we were unable to recover it. 00:25:13.763 [2024-07-15 23:51:48.832896] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.763 [2024-07-15 23:51:48.832975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.763 qpair failed and we were unable to recover it. 00:25:13.763 [2024-07-15 23:51:48.833257] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.763 [2024-07-15 23:51:48.833321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.763 qpair failed and we were unable to recover it. 
00:25:13.763 [2024-07-15 23:51:48.833630] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.763 [2024-07-15 23:51:48.833700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.763 qpair failed and we were unable to recover it. 00:25:13.763 [2024-07-15 23:51:48.833981] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.763 [2024-07-15 23:51:48.834047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.763 qpair failed and we were unable to recover it. 00:25:13.763 [2024-07-15 23:51:48.834360] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.763 [2024-07-15 23:51:48.834432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.763 qpair failed and we were unable to recover it. 00:25:13.763 [2024-07-15 23:51:48.834739] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.763 [2024-07-15 23:51:48.834804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.763 qpair failed and we were unable to recover it. 00:25:13.763 [2024-07-15 23:51:48.835076] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.763 [2024-07-15 23:51:48.835142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.763 qpair failed and we were unable to recover it. 00:25:13.763 [2024-07-15 23:51:48.835376] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.763 [2024-07-15 23:51:48.835443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.763 qpair failed and we were unable to recover it. 00:25:13.763 [2024-07-15 23:51:48.835731] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.763 [2024-07-15 23:51:48.835796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.763 qpair failed and we were unable to recover it. 00:25:13.763 [2024-07-15 23:51:48.836110] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.763 [2024-07-15 23:51:48.836175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.763 qpair failed and we were unable to recover it. 00:25:13.763 [2024-07-15 23:51:48.836491] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.763 [2024-07-15 23:51:48.836558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.763 qpair failed and we were unable to recover it. 00:25:13.763 [2024-07-15 23:51:48.836865] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.763 [2024-07-15 23:51:48.836929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.763 qpair failed and we were unable to recover it. 
00:25:13.763 [2024-07-15 23:51:48.837215] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.763 [2024-07-15 23:51:48.837279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.763 qpair failed and we were unable to recover it. 00:25:13.763 [2024-07-15 23:51:48.837549] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.763 [2024-07-15 23:51:48.837614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.763 qpair failed and we were unable to recover it. 00:25:13.763 [2024-07-15 23:51:48.837883] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.763 [2024-07-15 23:51:48.837949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.763 qpair failed and we were unable to recover it. 00:25:13.763 [2024-07-15 23:51:48.838268] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.763 [2024-07-15 23:51:48.838344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.763 qpair failed and we were unable to recover it. 00:25:13.763 [2024-07-15 23:51:48.838581] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.763 [2024-07-15 23:51:48.838649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.763 qpair failed and we were unable to recover it. 00:25:13.763 [2024-07-15 23:51:48.838888] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.763 [2024-07-15 23:51:48.838953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.763 qpair failed and we were unable to recover it. 00:25:13.763 [2024-07-15 23:51:48.839262] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.763 [2024-07-15 23:51:48.839331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.763 qpair failed and we were unable to recover it. 00:25:13.763 [2024-07-15 23:51:48.839647] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.763 [2024-07-15 23:51:48.839712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.763 qpair failed and we were unable to recover it. 00:25:13.763 [2024-07-15 23:51:48.840011] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.763 [2024-07-15 23:51:48.840079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:13.763 qpair failed and we were unable to recover it. 00:25:14.034 [2024-07-15 23:51:48.840405] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.034 [2024-07-15 23:51:48.840477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.034 qpair failed and we were unable to recover it. 
00:25:14.034 [2024-07-15 23:51:48.840739] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.034 [2024-07-15 23:51:48.840807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.034 qpair failed and we were unable to recover it. 00:25:14.034 [2024-07-15 23:51:48.841108] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.034 [2024-07-15 23:51:48.841186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.034 qpair failed and we were unable to recover it. 00:25:14.034 [2024-07-15 23:51:48.841506] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.034 [2024-07-15 23:51:48.841571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.034 qpair failed and we were unable to recover it. 00:25:14.034 [2024-07-15 23:51:48.841847] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.034 [2024-07-15 23:51:48.841915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.034 qpair failed and we were unable to recover it. 00:25:14.034 [2024-07-15 23:51:48.842223] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.034 [2024-07-15 23:51:48.842289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.034 qpair failed and we were unable to recover it. 00:25:14.034 [2024-07-15 23:51:48.842605] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.034 [2024-07-15 23:51:48.842676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.034 qpair failed and we were unable to recover it. 00:25:14.034 [2024-07-15 23:51:48.842995] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.034 [2024-07-15 23:51:48.843061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.034 qpair failed and we were unable to recover it. 00:25:14.034 [2024-07-15 23:51:48.843337] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.034 [2024-07-15 23:51:48.843404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.034 qpair failed and we were unable to recover it. 00:25:14.034 [2024-07-15 23:51:48.843690] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.034 [2024-07-15 23:51:48.843755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.034 qpair failed and we were unable to recover it. 00:25:14.034 [2024-07-15 23:51:48.844074] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.034 [2024-07-15 23:51:48.844141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.034 qpair failed and we were unable to recover it. 
00:25:14.034 [2024-07-15 23:51:48.844434] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.034 [2024-07-15 23:51:48.844498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.034 qpair failed and we were unable to recover it. 00:25:14.034 [2024-07-15 23:51:48.844823] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.034 [2024-07-15 23:51:48.844888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.034 qpair failed and we were unable to recover it. 00:25:14.034 [2024-07-15 23:51:48.845192] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.034 [2024-07-15 23:51:48.845281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.034 qpair failed and we were unable to recover it. 00:25:14.034 [2024-07-15 23:51:48.845554] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.034 [2024-07-15 23:51:48.845621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.034 qpair failed and we were unable to recover it. 00:25:14.034 [2024-07-15 23:51:48.845905] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.034 [2024-07-15 23:51:48.845988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.034 qpair failed and we were unable to recover it. 00:25:14.034 [2024-07-15 23:51:48.846272] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.034 [2024-07-15 23:51:48.846339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.034 qpair failed and we were unable to recover it. 00:25:14.034 [2024-07-15 23:51:48.846615] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.034 [2024-07-15 23:51:48.846681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.034 qpair failed and we were unable to recover it. 00:25:14.034 [2024-07-15 23:51:48.846989] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.034 [2024-07-15 23:51:48.847062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.034 qpair failed and we were unable to recover it. 00:25:14.034 [2024-07-15 23:51:48.847326] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.034 [2024-07-15 23:51:48.847391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.034 qpair failed and we were unable to recover it. 00:25:14.034 [2024-07-15 23:51:48.847658] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.034 [2024-07-15 23:51:48.847725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.034 qpair failed and we were unable to recover it. 
00:25:14.034 [2024-07-15 23:51:48.848036] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.034 [2024-07-15 23:51:48.848103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.034 qpair failed and we were unable to recover it. 00:25:14.034 [2024-07-15 23:51:48.848379] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.034 [2024-07-15 23:51:48.848445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.034 qpair failed and we were unable to recover it. 00:25:14.034 [2024-07-15 23:51:48.848669] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.034 [2024-07-15 23:51:48.848735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.034 qpair failed and we were unable to recover it. 00:25:14.035 [2024-07-15 23:51:48.849042] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.035 [2024-07-15 23:51:48.849118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.035 qpair failed and we were unable to recover it. 00:25:14.035 [2024-07-15 23:51:48.849338] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.035 [2024-07-15 23:51:48.849402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.035 qpair failed and we were unable to recover it. 00:25:14.035 [2024-07-15 23:51:48.849701] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.035 [2024-07-15 23:51:48.849766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.035 qpair failed and we were unable to recover it. 00:25:14.035 [2024-07-15 23:51:48.850039] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.035 [2024-07-15 23:51:48.850105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.035 qpair failed and we were unable to recover it. 00:25:14.035 [2024-07-15 23:51:48.850368] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.035 [2024-07-15 23:51:48.850433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.035 qpair failed and we were unable to recover it. 00:25:14.035 [2024-07-15 23:51:48.850706] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.035 [2024-07-15 23:51:48.850773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.035 qpair failed and we were unable to recover it. 00:25:14.035 [2024-07-15 23:51:48.851055] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.035 [2024-07-15 23:51:48.851122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.035 qpair failed and we were unable to recover it. 
00:25:14.035 [2024-07-15 23:51:48.851394] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.035 [2024-07-15 23:51:48.851459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.035 qpair failed and we were unable to recover it. 00:25:14.035 [2024-07-15 23:51:48.851726] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.035 [2024-07-15 23:51:48.851792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.035 qpair failed and we were unable to recover it. 00:25:14.035 [2024-07-15 23:51:48.852110] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.035 [2024-07-15 23:51:48.852174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.035 qpair failed and we were unable to recover it. 00:25:14.035 [2024-07-15 23:51:48.852449] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.035 [2024-07-15 23:51:48.852514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.035 qpair failed and we were unable to recover it. 00:25:14.035 [2024-07-15 23:51:48.852739] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.035 [2024-07-15 23:51:48.852805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.035 qpair failed and we were unable to recover it. 00:25:14.035 [2024-07-15 23:51:48.853022] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.035 [2024-07-15 23:51:48.853087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.035 qpair failed and we were unable to recover it. 00:25:14.035 [2024-07-15 23:51:48.853402] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.035 [2024-07-15 23:51:48.853471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.035 qpair failed and we were unable to recover it. 00:25:14.035 [2024-07-15 23:51:48.853778] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.035 [2024-07-15 23:51:48.853842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.035 qpair failed and we were unable to recover it. 00:25:14.035 [2024-07-15 23:51:48.854176] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.035 [2024-07-15 23:51:48.854243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.035 qpair failed and we were unable to recover it. 00:25:14.035 [2024-07-15 23:51:48.854569] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.035 [2024-07-15 23:51:48.854634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.035 qpair failed and we were unable to recover it. 
00:25:14.035 [2024-07-15 23:51:48.854907] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.035 [2024-07-15 23:51:48.854992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.035 qpair failed and we were unable to recover it. 00:25:14.035 [2024-07-15 23:51:48.855279] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.035 [2024-07-15 23:51:48.855345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.035 qpair failed and we were unable to recover it. 00:25:14.035 [2024-07-15 23:51:48.855643] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.035 [2024-07-15 23:51:48.855709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.035 qpair failed and we were unable to recover it. 00:25:14.035 [2024-07-15 23:51:48.856020] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.035 [2024-07-15 23:51:48.856093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.035 qpair failed and we were unable to recover it. 00:25:14.035 [2024-07-15 23:51:48.856410] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.035 [2024-07-15 23:51:48.856484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.035 qpair failed and we were unable to recover it. 00:25:14.035 [2024-07-15 23:51:48.856796] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.035 [2024-07-15 23:51:48.856873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.035 qpair failed and we were unable to recover it. 00:25:14.035 [2024-07-15 23:51:48.857138] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.035 [2024-07-15 23:51:48.857203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.035 qpair failed and we were unable to recover it. 00:25:14.035 [2024-07-15 23:51:48.857473] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.035 [2024-07-15 23:51:48.857540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.035 qpair failed and we were unable to recover it. 00:25:14.035 [2024-07-15 23:51:48.857824] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.035 [2024-07-15 23:51:48.857889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.035 qpair failed and we were unable to recover it. 00:25:14.035 [2024-07-15 23:51:48.858175] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.035 [2024-07-15 23:51:48.858243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.035 qpair failed and we were unable to recover it. 
00:25:14.035 [2024-07-15 23:51:48.858526] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.035 [2024-07-15 23:51:48.858590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.035 qpair failed and we were unable to recover it. 00:25:14.035 [2024-07-15 23:51:48.858817] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.035 [2024-07-15 23:51:48.858883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.035 qpair failed and we were unable to recover it. 00:25:14.035 [2024-07-15 23:51:48.859204] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.035 [2024-07-15 23:51:48.859288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.035 qpair failed and we were unable to recover it. 00:25:14.035 [2024-07-15 23:51:48.859559] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.035 [2024-07-15 23:51:48.859625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.035 qpair failed and we were unable to recover it. 00:25:14.035 [2024-07-15 23:51:48.859896] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.035 [2024-07-15 23:51:48.859978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.035 qpair failed and we were unable to recover it. 00:25:14.035 [2024-07-15 23:51:48.860298] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.035 [2024-07-15 23:51:48.860364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.035 qpair failed and we were unable to recover it. 00:25:14.035 [2024-07-15 23:51:48.860679] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.035 [2024-07-15 23:51:48.860744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.035 qpair failed and we were unable to recover it. 00:25:14.035 [2024-07-15 23:51:48.861028] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.035 [2024-07-15 23:51:48.861097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.035 qpair failed and we were unable to recover it. 00:25:14.035 [2024-07-15 23:51:48.861430] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.035 [2024-07-15 23:51:48.861497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.035 qpair failed and we were unable to recover it. 00:25:14.035 [2024-07-15 23:51:48.861781] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.035 [2024-07-15 23:51:48.861846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.035 qpair failed and we were unable to recover it. 
00:25:14.035 [2024-07-15 23:51:48.862141] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.035 [2024-07-15 23:51:48.862218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.035 qpair failed and we were unable to recover it. 00:25:14.035 [2024-07-15 23:51:48.862504] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.035 [2024-07-15 23:51:48.862568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.035 qpair failed and we were unable to recover it. 00:25:14.035 [2024-07-15 23:51:48.862850] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.035 [2024-07-15 23:51:48.862914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.035 qpair failed and we were unable to recover it. 00:25:14.035 [2024-07-15 23:51:48.863235] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.036 [2024-07-15 23:51:48.863306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.036 qpair failed and we were unable to recover it. 00:25:14.036 [2024-07-15 23:51:48.863636] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.036 [2024-07-15 23:51:48.863701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.036 qpair failed and we were unable to recover it. 00:25:14.036 [2024-07-15 23:51:48.864010] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.036 [2024-07-15 23:51:48.864085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.036 qpair failed and we were unable to recover it. 00:25:14.036 [2024-07-15 23:51:48.864424] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.036 [2024-07-15 23:51:48.864489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.036 qpair failed and we were unable to recover it. 00:25:14.036 [2024-07-15 23:51:48.864804] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.036 [2024-07-15 23:51:48.864873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.036 qpair failed and we were unable to recover it. 00:25:14.036 [2024-07-15 23:51:48.865153] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.036 [2024-07-15 23:51:48.865221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.036 qpair failed and we were unable to recover it. 00:25:14.036 [2024-07-15 23:51:48.865543] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.036 [2024-07-15 23:51:48.865608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.036 qpair failed and we were unable to recover it. 
00:25:14.036 [2024-07-15 23:51:48.865876] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.036 [2024-07-15 23:51:48.865941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.036 qpair failed and we were unable to recover it. 00:25:14.036 [2024-07-15 23:51:48.866267] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.036 [2024-07-15 23:51:48.866337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.036 qpair failed and we were unable to recover it. 00:25:14.036 [2024-07-15 23:51:48.866626] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.036 [2024-07-15 23:51:48.866692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.036 qpair failed and we were unable to recover it. 00:25:14.036 [2024-07-15 23:51:48.866999] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.036 [2024-07-15 23:51:48.867066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.036 qpair failed and we were unable to recover it. 00:25:14.036 [2024-07-15 23:51:48.867347] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.036 [2024-07-15 23:51:48.867414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.036 qpair failed and we were unable to recover it. 00:25:14.036 [2024-07-15 23:51:48.867694] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.036 [2024-07-15 23:51:48.867759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.036 qpair failed and we were unable to recover it. 00:25:14.036 [2024-07-15 23:51:48.868025] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.036 [2024-07-15 23:51:48.868091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.036 qpair failed and we were unable to recover it. 00:25:14.036 [2024-07-15 23:51:48.868412] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.036 [2024-07-15 23:51:48.868476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.036 qpair failed and we were unable to recover it. 00:25:14.036 [2024-07-15 23:51:48.868764] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.036 [2024-07-15 23:51:48.868829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.036 qpair failed and we were unable to recover it. 00:25:14.036 [2024-07-15 23:51:48.869107] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.036 [2024-07-15 23:51:48.869173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.036 qpair failed and we were unable to recover it. 
00:25:14.036 [2024-07-15 23:51:48.869456] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.036 [2024-07-15 23:51:48.869521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.036 qpair failed and we were unable to recover it. 00:25:14.036 [2024-07-15 23:51:48.869840] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.036 [2024-07-15 23:51:48.869904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.036 qpair failed and we were unable to recover it. 00:25:14.036 [2024-07-15 23:51:48.870224] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.036 [2024-07-15 23:51:48.870297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.036 qpair failed and we were unable to recover it. 00:25:14.036 [2024-07-15 23:51:48.870606] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.036 [2024-07-15 23:51:48.870671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.036 qpair failed and we were unable to recover it. 00:25:14.036 [2024-07-15 23:51:48.870987] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.036 [2024-07-15 23:51:48.871053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.036 qpair failed and we were unable to recover it. 00:25:14.036 [2024-07-15 23:51:48.871374] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.036 [2024-07-15 23:51:48.871439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.036 qpair failed and we were unable to recover it. 00:25:14.036 [2024-07-15 23:51:48.871718] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.036 [2024-07-15 23:51:48.871783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.036 qpair failed and we were unable to recover it. 00:25:14.036 [2024-07-15 23:51:48.872064] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.036 [2024-07-15 23:51:48.872131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.036 qpair failed and we were unable to recover it. 00:25:14.036 [2024-07-15 23:51:48.872448] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.036 [2024-07-15 23:51:48.872513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.036 qpair failed and we were unable to recover it. 00:25:14.036 [2024-07-15 23:51:48.872755] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.036 [2024-07-15 23:51:48.872820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.036 qpair failed and we were unable to recover it. 
00:25:14.036 [2024-07-15 23:51:48.873123] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.036 [2024-07-15 23:51:48.873195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.036 qpair failed and we were unable to recover it. 00:25:14.036 [2024-07-15 23:51:48.873463] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.036 [2024-07-15 23:51:48.873527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.036 qpair failed and we were unable to recover it. 00:25:14.036 [2024-07-15 23:51:48.873803] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.036 [2024-07-15 23:51:48.873879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.036 qpair failed and we were unable to recover it. 00:25:14.036 [2024-07-15 23:51:48.874185] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.036 [2024-07-15 23:51:48.874253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.036 qpair failed and we were unable to recover it. 00:25:14.036 [2024-07-15 23:51:48.874498] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.036 [2024-07-15 23:51:48.874562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.036 qpair failed and we were unable to recover it. 00:25:14.036 [2024-07-15 23:51:48.874835] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.036 [2024-07-15 23:51:48.874898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.036 qpair failed and we were unable to recover it. 00:25:14.036 [2024-07-15 23:51:48.875180] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.036 [2024-07-15 23:51:48.875247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.036 qpair failed and we were unable to recover it. 00:25:14.036 [2024-07-15 23:51:48.875517] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.036 [2024-07-15 23:51:48.875582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.036 qpair failed and we were unable to recover it. 00:25:14.036 [2024-07-15 23:51:48.875891] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.036 [2024-07-15 23:51:48.875989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.036 qpair failed and we were unable to recover it. 00:25:14.036 [2024-07-15 23:51:48.876259] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.036 [2024-07-15 23:51:48.876323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.036 qpair failed and we were unable to recover it. 
00:25:14.036 [2024-07-15 23:51:48.876595] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.036 [2024-07-15 23:51:48.876660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.036 qpair failed and we were unable to recover it. 00:25:14.036 [2024-07-15 23:51:48.876988] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.036 [2024-07-15 23:51:48.877054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.036 qpair failed and we were unable to recover it. 00:25:14.036 [2024-07-15 23:51:48.877364] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.036 [2024-07-15 23:51:48.877435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.037 qpair failed and we were unable to recover it. 00:25:14.037 [2024-07-15 23:51:48.877753] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.037 [2024-07-15 23:51:48.877817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.037 qpair failed and we were unable to recover it. 00:25:14.037 [2024-07-15 23:51:48.878119] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.037 [2024-07-15 23:51:48.878195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.037 qpair failed and we were unable to recover it. 00:25:14.037 [2024-07-15 23:51:48.878522] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.037 [2024-07-15 23:51:48.878585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.037 qpair failed and we were unable to recover it. 00:25:14.037 [2024-07-15 23:51:48.878905] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.037 [2024-07-15 23:51:48.878988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.037 qpair failed and we were unable to recover it. 00:25:14.037 [2024-07-15 23:51:48.879302] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.037 [2024-07-15 23:51:48.879367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.037 qpair failed and we were unable to recover it. 00:25:14.037 [2024-07-15 23:51:48.879683] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.037 [2024-07-15 23:51:48.879752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.037 qpair failed and we were unable to recover it. 00:25:14.037 [2024-07-15 23:51:48.880042] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.037 [2024-07-15 23:51:48.880109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.037 qpair failed and we were unable to recover it. 
00:25:14.037 [2024-07-15 23:51:48.880391] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.037 [2024-07-15 23:51:48.880455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.037 qpair failed and we were unable to recover it. 00:25:14.037 [2024-07-15 23:51:48.880724] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.037 [2024-07-15 23:51:48.880791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.037 qpair failed and we were unable to recover it. 00:25:14.037 [2024-07-15 23:51:48.881059] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.037 [2024-07-15 23:51:48.881125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.037 qpair failed and we were unable to recover it. 00:25:14.037 [2024-07-15 23:51:48.881411] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.037 [2024-07-15 23:51:48.881476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.037 qpair failed and we were unable to recover it. 00:25:14.037 [2024-07-15 23:51:48.881702] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.037 [2024-07-15 23:51:48.881767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.037 qpair failed and we were unable to recover it. 00:25:14.037 [2024-07-15 23:51:48.882046] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.037 [2024-07-15 23:51:48.882111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.037 qpair failed and we were unable to recover it. 00:25:14.037 [2024-07-15 23:51:48.882345] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.037 [2024-07-15 23:51:48.882410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.037 qpair failed and we were unable to recover it. 00:25:14.037 [2024-07-15 23:51:48.882643] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.037 [2024-07-15 23:51:48.882707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.037 qpair failed and we were unable to recover it. 00:25:14.037 [2024-07-15 23:51:48.882981] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.037 [2024-07-15 23:51:48.883046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.037 qpair failed and we were unable to recover it. 00:25:14.037 [2024-07-15 23:51:48.883337] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.037 [2024-07-15 23:51:48.883402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.037 qpair failed and we were unable to recover it. 
00:25:14.037 [2024-07-15 23:51:48.883666] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.037 [2024-07-15 23:51:48.883733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.037 qpair failed and we were unable to recover it. 00:25:14.037 [2024-07-15 23:51:48.884003] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.037 [2024-07-15 23:51:48.884071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.037 qpair failed and we were unable to recover it. 00:25:14.037 [2024-07-15 23:51:48.884378] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.037 [2024-07-15 23:51:48.884447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.037 qpair failed and we were unable to recover it. 00:25:14.037 [2024-07-15 23:51:48.884732] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.037 [2024-07-15 23:51:48.884797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.037 qpair failed and we were unable to recover it. 00:25:14.037 [2024-07-15 23:51:48.885104] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.037 [2024-07-15 23:51:48.885177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.037 qpair failed and we were unable to recover it. 00:25:14.037 [2024-07-15 23:51:48.885422] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.037 [2024-07-15 23:51:48.885489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.037 qpair failed and we were unable to recover it. 00:25:14.037 [2024-07-15 23:51:48.885729] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.037 [2024-07-15 23:51:48.885795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.037 qpair failed and we were unable to recover it. 00:25:14.037 [2024-07-15 23:51:48.886069] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.037 [2024-07-15 23:51:48.886142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.037 qpair failed and we were unable to recover it. 00:25:14.037 [2024-07-15 23:51:48.886417] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.037 [2024-07-15 23:51:48.886481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.037 qpair failed and we were unable to recover it. 00:25:14.037 [2024-07-15 23:51:48.886757] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.037 [2024-07-15 23:51:48.886821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.037 qpair failed and we were unable to recover it. 
00:25:14.037 [2024-07-15 23:51:48.887100] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.037 [2024-07-15 23:51:48.887165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.037 qpair failed and we were unable to recover it. 00:25:14.037 [2024-07-15 23:51:48.887441] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.037 [2024-07-15 23:51:48.887506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.037 qpair failed and we were unable to recover it. 00:25:14.037 [2024-07-15 23:51:48.887784] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.037 [2024-07-15 23:51:48.887860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.037 qpair failed and we were unable to recover it. 00:25:14.037 [2024-07-15 23:51:48.888200] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.037 [2024-07-15 23:51:48.888267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.037 qpair failed and we were unable to recover it. 00:25:14.037 [2024-07-15 23:51:48.888538] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.037 [2024-07-15 23:51:48.888603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.037 qpair failed and we were unable to recover it. 00:25:14.037 [2024-07-15 23:51:48.888847] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.037 [2024-07-15 23:51:48.888915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.037 qpair failed and we were unable to recover it. 00:25:14.037 [2024-07-15 23:51:48.889215] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.037 [2024-07-15 23:51:48.889294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.037 qpair failed and we were unable to recover it. 00:25:14.037 [2024-07-15 23:51:48.889600] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.037 [2024-07-15 23:51:48.889665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.037 qpair failed and we were unable to recover it. 00:25:14.037 [2024-07-15 23:51:48.889923] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.037 [2024-07-15 23:51:48.890005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.037 qpair failed and we were unable to recover it. 00:25:14.037 [2024-07-15 23:51:48.890283] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.037 [2024-07-15 23:51:48.890348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.037 qpair failed and we were unable to recover it. 
00:25:14.037 [2024-07-15 23:51:48.890624] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.037 [2024-07-15 23:51:48.890689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.037 qpair failed and we were unable to recover it. 00:25:14.037 [2024-07-15 23:51:48.890969] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.038 [2024-07-15 23:51:48.891035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.038 qpair failed and we were unable to recover it. 00:25:14.038 [2024-07-15 23:51:48.891312] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.038 [2024-07-15 23:51:48.891378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.038 qpair failed and we were unable to recover it. 00:25:14.038 [2024-07-15 23:51:48.891656] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.038 [2024-07-15 23:51:48.891719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.038 qpair failed and we were unable to recover it. 00:25:14.038 [2024-07-15 23:51:48.891990] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.038 [2024-07-15 23:51:48.892056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.038 qpair failed and we were unable to recover it. 00:25:14.038 [2024-07-15 23:51:48.892321] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.038 [2024-07-15 23:51:48.892386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.038 qpair failed and we were unable to recover it. 00:25:14.038 [2024-07-15 23:51:48.892680] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.038 [2024-07-15 23:51:48.892744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.038 qpair failed and we were unable to recover it. 00:25:14.038 [2024-07-15 23:51:48.893020] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.038 [2024-07-15 23:51:48.893085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.038 qpair failed and we were unable to recover it. 00:25:14.038 [2024-07-15 23:51:48.893388] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.038 [2024-07-15 23:51:48.893463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.038 qpair failed and we were unable to recover it. 00:25:14.038 [2024-07-15 23:51:48.893780] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.038 [2024-07-15 23:51:48.893844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.038 qpair failed and we were unable to recover it. 
00:25:14.038 [2024-07-15 23:51:48.894103] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.038 [2024-07-15 23:51:48.894168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.038 qpair failed and we were unable to recover it. 00:25:14.038 [2024-07-15 23:51:48.894408] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.038 [2024-07-15 23:51:48.894473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.038 qpair failed and we were unable to recover it. 00:25:14.038 [2024-07-15 23:51:48.894791] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.038 [2024-07-15 23:51:48.894861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.038 qpair failed and we were unable to recover it. 00:25:14.038 [2024-07-15 23:51:48.895113] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.038 [2024-07-15 23:51:48.895181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.038 qpair failed and we were unable to recover it. 00:25:14.038 [2024-07-15 23:51:48.895470] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.038 [2024-07-15 23:51:48.895535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.038 qpair failed and we were unable to recover it. 00:25:14.038 [2024-07-15 23:51:48.895770] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.038 [2024-07-15 23:51:48.895834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.038 qpair failed and we were unable to recover it. 00:25:14.038 [2024-07-15 23:51:48.896141] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.038 [2024-07-15 23:51:48.896207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.038 qpair failed and we were unable to recover it. 00:25:14.038 [2024-07-15 23:51:48.896513] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.038 [2024-07-15 23:51:48.896587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.038 qpair failed and we were unable to recover it. 00:25:14.038 [2024-07-15 23:51:48.896831] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.038 [2024-07-15 23:51:48.896896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.038 qpair failed and we were unable to recover it. 00:25:14.038 [2024-07-15 23:51:48.897165] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.038 [2024-07-15 23:51:48.897232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.038 qpair failed and we were unable to recover it. 
00:25:14.038 [2024-07-15 23:51:48.897502] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.038 [2024-07-15 23:51:48.897567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.038 qpair failed and we were unable to recover it. 00:25:14.038 [2024-07-15 23:51:48.897876] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.038 [2024-07-15 23:51:48.897946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.038 qpair failed and we were unable to recover it. 00:25:14.038 [2024-07-15 23:51:48.898274] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.038 [2024-07-15 23:51:48.898339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.038 qpair failed and we were unable to recover it. 00:25:14.038 [2024-07-15 23:51:48.898648] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.038 [2024-07-15 23:51:48.898719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.038 qpair failed and we were unable to recover it. 00:25:14.038 [2024-07-15 23:51:48.898982] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.038 [2024-07-15 23:51:48.899049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.038 qpair failed and we were unable to recover it. 00:25:14.038 [2024-07-15 23:51:48.899364] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.038 [2024-07-15 23:51:48.899434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.038 qpair failed and we were unable to recover it. 00:25:14.038 [2024-07-15 23:51:48.899708] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.038 [2024-07-15 23:51:48.899773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.038 qpair failed and we were unable to recover it. 00:25:14.038 [2024-07-15 23:51:48.900024] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.038 [2024-07-15 23:51:48.900092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.038 qpair failed and we were unable to recover it. 00:25:14.038 [2024-07-15 23:51:48.900383] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.038 [2024-07-15 23:51:48.900460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.038 qpair failed and we were unable to recover it. 00:25:14.038 [2024-07-15 23:51:48.900728] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.038 [2024-07-15 23:51:48.900795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.038 qpair failed and we were unable to recover it. 
00:25:14.038 [2024-07-15 23:51:48.901079] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.038 [2024-07-15 23:51:48.901146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.038 qpair failed and we were unable to recover it. 00:25:14.038 [2024-07-15 23:51:48.901426] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.038 [2024-07-15 23:51:48.901491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.038 qpair failed and we were unable to recover it. 00:25:14.038 [2024-07-15 23:51:48.901752] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.038 [2024-07-15 23:51:48.901829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.039 qpair failed and we were unable to recover it. 00:25:14.039 [2024-07-15 23:51:48.902103] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.039 [2024-07-15 23:51:48.902169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.039 qpair failed and we were unable to recover it. 00:25:14.039 [2024-07-15 23:51:48.902485] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.039 [2024-07-15 23:51:48.902551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.039 qpair failed and we were unable to recover it. 00:25:14.039 [2024-07-15 23:51:48.902822] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.039 [2024-07-15 23:51:48.902886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.039 qpair failed and we were unable to recover it. 00:25:14.039 [2024-07-15 23:51:48.903175] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.039 [2024-07-15 23:51:48.903244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.039 qpair failed and we were unable to recover it. 00:25:14.039 [2024-07-15 23:51:48.903496] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.039 [2024-07-15 23:51:48.903562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.039 qpair failed and we were unable to recover it. 00:25:14.039 [2024-07-15 23:51:48.903837] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.039 [2024-07-15 23:51:48.903901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.039 qpair failed and we were unable to recover it. 00:25:14.039 [2024-07-15 23:51:48.904206] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.039 [2024-07-15 23:51:48.904273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.039 qpair failed and we were unable to recover it. 
00:25:14.039 [2024-07-15 23:51:48.904539] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.039 [2024-07-15 23:51:48.904603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.039 qpair failed and we were unable to recover it. 00:25:14.039 [2024-07-15 23:51:48.904913] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.039 [2024-07-15 23:51:48.904995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.039 qpair failed and we were unable to recover it. 00:25:14.039 [2024-07-15 23:51:48.905291] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.039 [2024-07-15 23:51:48.905355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.039 qpair failed and we were unable to recover it. 00:25:14.039 [2024-07-15 23:51:48.905671] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.039 [2024-07-15 23:51:48.905735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.039 qpair failed and we were unable to recover it. 00:25:14.039 [2024-07-15 23:51:48.906005] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.039 [2024-07-15 23:51:48.906072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.039 qpair failed and we were unable to recover it. 00:25:14.039 [2024-07-15 23:51:48.906379] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.039 [2024-07-15 23:51:48.906444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.039 qpair failed and we were unable to recover it. 00:25:14.039 [2024-07-15 23:51:48.906730] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.039 [2024-07-15 23:51:48.906795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.039 qpair failed and we were unable to recover it. 00:25:14.039 [2024-07-15 23:51:48.907071] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.039 [2024-07-15 23:51:48.907138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.039 qpair failed and we were unable to recover it. 00:25:14.039 [2024-07-15 23:51:48.907439] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.039 [2024-07-15 23:51:48.907514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.039 qpair failed and we were unable to recover it. 00:25:14.039 [2024-07-15 23:51:48.907764] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.039 [2024-07-15 23:51:48.907830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.039 qpair failed and we were unable to recover it. 
00:25:14.039 [2024-07-15 23:51:48.908119] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.039 [2024-07-15 23:51:48.908185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.039 qpair failed and we were unable to recover it. 00:25:14.039 [2024-07-15 23:51:48.908453] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.039 [2024-07-15 23:51:48.908517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.039 qpair failed and we were unable to recover it. 00:25:14.039 [2024-07-15 23:51:48.908768] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.039 [2024-07-15 23:51:48.908833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.039 qpair failed and we were unable to recover it. 00:25:14.039 [2024-07-15 23:51:48.909060] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.039 [2024-07-15 23:51:48.909126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.039 qpair failed and we were unable to recover it. 00:25:14.039 [2024-07-15 23:51:48.909422] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.039 [2024-07-15 23:51:48.909497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.039 qpair failed and we were unable to recover it. 00:25:14.039 [2024-07-15 23:51:48.909767] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.039 [2024-07-15 23:51:48.909832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.039 qpair failed and we were unable to recover it. 00:25:14.039 [2024-07-15 23:51:48.910139] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.039 [2024-07-15 23:51:48.910214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.039 qpair failed and we were unable to recover it. 00:25:14.039 [2024-07-15 23:51:48.910541] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.039 [2024-07-15 23:51:48.910605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.039 qpair failed and we were unable to recover it. 00:25:14.039 [2024-07-15 23:51:48.910859] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.039 [2024-07-15 23:51:48.910927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.039 qpair failed and we were unable to recover it. 00:25:14.039 [2024-07-15 23:51:48.911216] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.039 [2024-07-15 23:51:48.911282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.039 qpair failed and we were unable to recover it. 
00:25:14.039 [2024-07-15 23:51:48.911543] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.039 [2024-07-15 23:51:48.911608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.039 qpair failed and we were unable to recover it. 00:25:14.039 [2024-07-15 23:51:48.911913] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.039 [2024-07-15 23:51:48.912002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.039 qpair failed and we were unable to recover it. 00:25:14.039 [2024-07-15 23:51:48.912281] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.039 [2024-07-15 23:51:48.912346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.039 qpair failed and we were unable to recover it. 00:25:14.039 [2024-07-15 23:51:48.912620] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.039 [2024-07-15 23:51:48.912685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.039 qpair failed and we were unable to recover it. 00:25:14.039 [2024-07-15 23:51:48.912967] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.039 [2024-07-15 23:51:48.913032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.039 qpair failed and we were unable to recover it. 00:25:14.039 [2024-07-15 23:51:48.913293] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.039 [2024-07-15 23:51:48.913357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.039 qpair failed and we were unable to recover it. 00:25:14.039 [2024-07-15 23:51:48.913662] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.039 [2024-07-15 23:51:48.913732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.039 qpair failed and we were unable to recover it. 00:25:14.039 [2024-07-15 23:51:48.913989] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.039 [2024-07-15 23:51:48.914056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.039 qpair failed and we were unable to recover it. 00:25:14.039 [2024-07-15 23:51:48.914297] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.039 [2024-07-15 23:51:48.914361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.039 qpair failed and we were unable to recover it. 00:25:14.039 [2024-07-15 23:51:48.914587] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.039 [2024-07-15 23:51:48.914652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.039 qpair failed and we were unable to recover it. 
00:25:14.039 [2024-07-15 23:51:48.914864] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.039 [2024-07-15 23:51:48.914928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.039 qpair failed and we were unable to recover it. 00:25:14.039 [2024-07-15 23:51:48.915203] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.039 [2024-07-15 23:51:48.915268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.040 qpair failed and we were unable to recover it. 00:25:14.040 [2024-07-15 23:51:48.915504] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.040 [2024-07-15 23:51:48.915584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.040 qpair failed and we were unable to recover it. 00:25:14.040 [2024-07-15 23:51:48.915876] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.040 [2024-07-15 23:51:48.915941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.040 qpair failed and we were unable to recover it. 00:25:14.040 [2024-07-15 23:51:48.916238] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.040 [2024-07-15 23:51:48.916303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.040 qpair failed and we were unable to recover it. 00:25:14.040 [2024-07-15 23:51:48.916569] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.040 [2024-07-15 23:51:48.916632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.040 qpair failed and we were unable to recover it. 00:25:14.040 [2024-07-15 23:51:48.916904] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.040 [2024-07-15 23:51:48.916995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.040 qpair failed and we were unable to recover it. 00:25:14.040 [2024-07-15 23:51:48.917221] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.040 [2024-07-15 23:51:48.917286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.040 qpair failed and we were unable to recover it. 00:25:14.040 [2024-07-15 23:51:48.917524] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.040 [2024-07-15 23:51:48.917592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.040 qpair failed and we were unable to recover it. 00:25:14.040 [2024-07-15 23:51:48.917872] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.040 [2024-07-15 23:51:48.917937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.040 qpair failed and we were unable to recover it. 
00:25:14.040 [2024-07-15 23:51:48.918267] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.040 [2024-07-15 23:51:48.918331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.040 qpair failed and we were unable to recover it. 00:25:14.040 [2024-07-15 23:51:48.918577] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.040 [2024-07-15 23:51:48.918642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.040 qpair failed and we were unable to recover it. 00:25:14.040 [2024-07-15 23:51:48.918905] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.040 [2024-07-15 23:51:48.918987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.040 qpair failed and we were unable to recover it. 00:25:14.040 [2024-07-15 23:51:48.919270] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.040 [2024-07-15 23:51:48.919335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.040 qpair failed and we were unable to recover it. 00:25:14.040 [2024-07-15 23:51:48.919657] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.040 [2024-07-15 23:51:48.919723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.040 qpair failed and we were unable to recover it. 00:25:14.040 [2024-07-15 23:51:48.919996] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.040 [2024-07-15 23:51:48.920062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.040 qpair failed and we were unable to recover it. 00:25:14.040 [2024-07-15 23:51:48.920372] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.040 [2024-07-15 23:51:48.920437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.040 qpair failed and we were unable to recover it. 00:25:14.040 [2024-07-15 23:51:48.920720] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.040 [2024-07-15 23:51:48.920786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.040 qpair failed and we were unable to recover it. 00:25:14.040 [2024-07-15 23:51:48.921093] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.040 [2024-07-15 23:51:48.921166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.040 qpair failed and we were unable to recover it. 00:25:14.040 [2024-07-15 23:51:48.921475] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.040 [2024-07-15 23:51:48.921539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.040 qpair failed and we were unable to recover it. 
00:25:14.040 [2024-07-15 23:51:48.921817] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.040 [2024-07-15 23:51:48.921881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.040 qpair failed and we were unable to recover it. 00:25:14.040 [2024-07-15 23:51:48.922134] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.040 [2024-07-15 23:51:48.922201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.040 qpair failed and we were unable to recover it. 00:25:14.040 [2024-07-15 23:51:48.922431] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.040 [2024-07-15 23:51:48.922497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.040 qpair failed and we were unable to recover it. 00:25:14.040 [2024-07-15 23:51:48.922753] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.040 [2024-07-15 23:51:48.922819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.040 qpair failed and we were unable to recover it. 00:25:14.040 [2024-07-15 23:51:48.923064] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.040 [2024-07-15 23:51:48.923134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.040 qpair failed and we were unable to recover it. 00:25:14.040 [2024-07-15 23:51:48.923383] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.040 [2024-07-15 23:51:48.923449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.040 qpair failed and we were unable to recover it. 00:25:14.040 [2024-07-15 23:51:48.923759] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.040 [2024-07-15 23:51:48.923828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.040 qpair failed and we were unable to recover it. 00:25:14.040 [2024-07-15 23:51:48.924100] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.040 [2024-07-15 23:51:48.924166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.040 qpair failed and we were unable to recover it. 00:25:14.040 [2024-07-15 23:51:48.924477] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.040 [2024-07-15 23:51:48.924547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.040 qpair failed and we were unable to recover it. 00:25:14.040 [2024-07-15 23:51:48.924861] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.040 [2024-07-15 23:51:48.924927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.040 qpair failed and we were unable to recover it. 
00:25:14.040 [2024-07-15 23:51:48.925264] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.040 [2024-07-15 23:51:48.925333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.040 qpair failed and we were unable to recover it. 00:25:14.040 [2024-07-15 23:51:48.925640] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.040 [2024-07-15 23:51:48.925705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.040 qpair failed and we were unable to recover it. 00:25:14.040 [2024-07-15 23:51:48.925965] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.040 [2024-07-15 23:51:48.926031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.040 qpair failed and we were unable to recover it. 00:25:14.040 [2024-07-15 23:51:48.926339] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.040 [2024-07-15 23:51:48.926404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.040 qpair failed and we were unable to recover it. 00:25:14.040 [2024-07-15 23:51:48.926707] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.040 [2024-07-15 23:51:48.926773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.040 qpair failed and we were unable to recover it. 00:25:14.040 [2024-07-15 23:51:48.927084] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.040 [2024-07-15 23:51:48.927160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.040 qpair failed and we were unable to recover it. 00:25:14.040 [2024-07-15 23:51:48.927478] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.040 [2024-07-15 23:51:48.927544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.040 qpair failed and we were unable to recover it. 00:25:14.040 [2024-07-15 23:51:48.927843] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.040 [2024-07-15 23:51:48.927907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.040 qpair failed and we were unable to recover it. 00:25:14.040 [2024-07-15 23:51:48.928274] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.040 [2024-07-15 23:51:48.928375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.040 qpair failed and we were unable to recover it. 00:25:14.040 [2024-07-15 23:51:48.928657] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.040 [2024-07-15 23:51:48.928726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.040 qpair failed and we were unable to recover it. 
00:25:14.040 [2024-07-15 23:51:48.929035] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.041 [2024-07-15 23:51:48.929114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.041 qpair failed and we were unable to recover it. 00:25:14.041 [2024-07-15 23:51:48.929363] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.041 [2024-07-15 23:51:48.929429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.041 qpair failed and we were unable to recover it. 00:25:14.041 [2024-07-15 23:51:48.929697] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.041 [2024-07-15 23:51:48.929776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.041 qpair failed and we were unable to recover it. 00:25:14.041 [2024-07-15 23:51:48.930094] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.041 [2024-07-15 23:51:48.930160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.041 qpair failed and we were unable to recover it. 00:25:14.041 [2024-07-15 23:51:48.930420] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.041 [2024-07-15 23:51:48.930485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.041 qpair failed and we were unable to recover it. 00:25:14.041 [2024-07-15 23:51:48.930765] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.041 [2024-07-15 23:51:48.930832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.041 qpair failed and we were unable to recover it. 00:25:14.041 [2024-07-15 23:51:48.931106] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.041 [2024-07-15 23:51:48.931172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.041 qpair failed and we were unable to recover it. 00:25:14.041 [2024-07-15 23:51:48.931493] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.041 [2024-07-15 23:51:48.931559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.041 qpair failed and we were unable to recover it. 00:25:14.041 [2024-07-15 23:51:48.931843] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.041 [2024-07-15 23:51:48.931918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.041 qpair failed and we were unable to recover it. 00:25:14.041 [2024-07-15 23:51:48.932249] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.041 [2024-07-15 23:51:48.932315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.041 qpair failed and we were unable to recover it. 
00:25:14.041 [2024-07-15 23:51:48.932589] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.041 [2024-07-15 23:51:48.932655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.041 qpair failed and we were unable to recover it. 00:25:14.041 [2024-07-15 23:51:48.932976] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.041 [2024-07-15 23:51:48.933044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.041 qpair failed and we were unable to recover it. 00:25:14.041 [2024-07-15 23:51:48.933360] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.041 [2024-07-15 23:51:48.933426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.041 qpair failed and we were unable to recover it. 00:25:14.041 [2024-07-15 23:51:48.933740] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.041 [2024-07-15 23:51:48.933817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.041 qpair failed and we were unable to recover it. 00:25:14.041 [2024-07-15 23:51:48.934070] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.041 [2024-07-15 23:51:48.934137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.041 qpair failed and we were unable to recover it. 00:25:14.041 [2024-07-15 23:51:48.934403] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.041 [2024-07-15 23:51:48.934470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.041 qpair failed and we were unable to recover it. 00:25:14.041 [2024-07-15 23:51:48.934757] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.041 [2024-07-15 23:51:48.934823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.041 qpair failed and we were unable to recover it. 00:25:14.041 [2024-07-15 23:51:48.935093] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.041 [2024-07-15 23:51:48.935159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.041 qpair failed and we were unable to recover it. 00:25:14.041 [2024-07-15 23:51:48.935428] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.041 [2024-07-15 23:51:48.935493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.041 qpair failed and we were unable to recover it. 00:25:14.041 [2024-07-15 23:51:48.935754] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.041 [2024-07-15 23:51:48.935818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.041 qpair failed and we were unable to recover it. 
00:25:14.047 [2024-07-15 23:51:49.006428] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.047 [2024-07-15 23:51:49.006495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.047 qpair failed and we were unable to recover it. 00:25:14.047 [2024-07-15 23:51:49.006762] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.047 [2024-07-15 23:51:49.006828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.047 qpair failed and we were unable to recover it. 00:25:14.047 [2024-07-15 23:51:49.007082] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.047 [2024-07-15 23:51:49.007148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.047 qpair failed and we were unable to recover it. 00:25:14.047 [2024-07-15 23:51:49.007437] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.047 [2024-07-15 23:51:49.007503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.047 qpair failed and we were unable to recover it. 00:25:14.047 [2024-07-15 23:51:49.007748] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.047 [2024-07-15 23:51:49.007817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.047 qpair failed and we were unable to recover it. 00:25:14.047 [2024-07-15 23:51:49.008072] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.047 [2024-07-15 23:51:49.008141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.047 qpair failed and we were unable to recover it. 00:25:14.047 [2024-07-15 23:51:49.008390] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.047 [2024-07-15 23:51:49.008457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.047 qpair failed and we were unable to recover it. 00:25:14.047 [2024-07-15 23:51:49.008772] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.047 [2024-07-15 23:51:49.008838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.047 qpair failed and we were unable to recover it. 00:25:14.047 [2024-07-15 23:51:49.009149] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.047 [2024-07-15 23:51:49.009217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.047 qpair failed and we were unable to recover it. 00:25:14.047 [2024-07-15 23:51:49.009524] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.047 [2024-07-15 23:51:49.009600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.047 qpair failed and we were unable to recover it. 
00:25:14.047 [2024-07-15 23:51:49.009878] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.047 [2024-07-15 23:51:49.009947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.047 qpair failed and we were unable to recover it. 00:25:14.047 [2024-07-15 23:51:49.010275] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.047 [2024-07-15 23:51:49.010342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.048 qpair failed and we were unable to recover it. 00:25:14.048 [2024-07-15 23:51:49.010659] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.048 [2024-07-15 23:51:49.010725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.048 qpair failed and we were unable to recover it. 00:25:14.048 [2024-07-15 23:51:49.010985] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.048 [2024-07-15 23:51:49.011054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.048 qpair failed and we were unable to recover it. 00:25:14.048 [2024-07-15 23:51:49.011329] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.048 [2024-07-15 23:51:49.011396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.048 qpair failed and we were unable to recover it. 00:25:14.048 [2024-07-15 23:51:49.011710] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.048 [2024-07-15 23:51:49.011776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.048 qpair failed and we were unable to recover it. 00:25:14.048 [2024-07-15 23:51:49.012065] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.048 [2024-07-15 23:51:49.012133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.048 qpair failed and we were unable to recover it. 00:25:14.048 [2024-07-15 23:51:49.012390] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.048 [2024-07-15 23:51:49.012457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.048 qpair failed and we were unable to recover it. 00:25:14.048 [2024-07-15 23:51:49.012723] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.048 [2024-07-15 23:51:49.012789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.048 qpair failed and we were unable to recover it. 00:25:14.048 [2024-07-15 23:51:49.013067] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.048 [2024-07-15 23:51:49.013135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.048 qpair failed and we were unable to recover it. 
00:25:14.048 [2024-07-15 23:51:49.013373] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.048 [2024-07-15 23:51:49.013440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.048 qpair failed and we were unable to recover it. 00:25:14.048 [2024-07-15 23:51:49.013725] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.048 [2024-07-15 23:51:49.013791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.048 qpair failed and we were unable to recover it. 00:25:14.048 [2024-07-15 23:51:49.014090] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.048 [2024-07-15 23:51:49.014157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.048 qpair failed and we were unable to recover it. 00:25:14.048 [2024-07-15 23:51:49.014435] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.048 [2024-07-15 23:51:49.014503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.048 qpair failed and we were unable to recover it. 00:25:14.048 [2024-07-15 23:51:49.014814] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.048 [2024-07-15 23:51:49.014880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.048 qpair failed and we were unable to recover it. 00:25:14.048 [2024-07-15 23:51:49.015164] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.048 [2024-07-15 23:51:49.015231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.048 qpair failed and we were unable to recover it. 00:25:14.048 [2024-07-15 23:51:49.015539] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.048 [2024-07-15 23:51:49.015605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.048 qpair failed and we were unable to recover it. 00:25:14.048 [2024-07-15 23:51:49.015891] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.048 [2024-07-15 23:51:49.015973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.048 qpair failed and we were unable to recover it. 00:25:14.048 [2024-07-15 23:51:49.016243] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.048 [2024-07-15 23:51:49.016310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.048 qpair failed and we were unable to recover it. 00:25:14.048 [2024-07-15 23:51:49.016628] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.048 [2024-07-15 23:51:49.016694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.048 qpair failed and we were unable to recover it. 
00:25:14.048 [2024-07-15 23:51:49.017006] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.048 [2024-07-15 23:51:49.017076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.048 qpair failed and we were unable to recover it. 00:25:14.048 [2024-07-15 23:51:49.017372] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.048 [2024-07-15 23:51:49.017438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.048 qpair failed and we were unable to recover it. 00:25:14.048 [2024-07-15 23:51:49.017696] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.048 [2024-07-15 23:51:49.017762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.048 qpair failed and we were unable to recover it. 00:25:14.048 [2024-07-15 23:51:49.018013] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.048 [2024-07-15 23:51:49.018082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.048 qpair failed and we were unable to recover it. 00:25:14.048 [2024-07-15 23:51:49.018356] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.048 [2024-07-15 23:51:49.018424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.048 qpair failed and we were unable to recover it. 00:25:14.048 [2024-07-15 23:51:49.018733] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.048 [2024-07-15 23:51:49.018800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.048 qpair failed and we were unable to recover it. 00:25:14.048 [2024-07-15 23:51:49.019088] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.048 [2024-07-15 23:51:49.019156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.048 qpair failed and we were unable to recover it. 00:25:14.049 [2024-07-15 23:51:49.019393] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.049 [2024-07-15 23:51:49.019459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.049 qpair failed and we were unable to recover it. 00:25:14.049 [2024-07-15 23:51:49.019684] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.049 [2024-07-15 23:51:49.019752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.049 qpair failed and we were unable to recover it. 00:25:14.049 [2024-07-15 23:51:49.020043] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.049 [2024-07-15 23:51:49.020110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.049 qpair failed and we were unable to recover it. 
00:25:14.049 [2024-07-15 23:51:49.020337] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.049 [2024-07-15 23:51:49.020404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.049 qpair failed and we were unable to recover it. 00:25:14.049 [2024-07-15 23:51:49.020699] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.049 [2024-07-15 23:51:49.020764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.049 qpair failed and we were unable to recover it. 00:25:14.049 [2024-07-15 23:51:49.021011] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.049 [2024-07-15 23:51:49.021078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.049 qpair failed and we were unable to recover it. 00:25:14.049 [2024-07-15 23:51:49.021351] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.049 [2024-07-15 23:51:49.021418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.049 qpair failed and we were unable to recover it. 00:25:14.049 [2024-07-15 23:51:49.021688] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.049 [2024-07-15 23:51:49.021757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.049 qpair failed and we were unable to recover it. 00:25:14.049 [2024-07-15 23:51:49.022078] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.049 [2024-07-15 23:51:49.022147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.049 qpair failed and we were unable to recover it. 00:25:14.049 [2024-07-15 23:51:49.022445] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.049 [2024-07-15 23:51:49.022512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.049 qpair failed and we were unable to recover it. 00:25:14.049 [2024-07-15 23:51:49.022763] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.049 [2024-07-15 23:51:49.022829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.049 qpair failed and we were unable to recover it. 00:25:14.049 [2024-07-15 23:51:49.023113] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.049 [2024-07-15 23:51:49.023183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.049 qpair failed and we were unable to recover it. 00:25:14.049 [2024-07-15 23:51:49.023457] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.049 [2024-07-15 23:51:49.023534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.049 qpair failed and we were unable to recover it. 
00:25:14.049 [2024-07-15 23:51:49.023843] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.049 [2024-07-15 23:51:49.023909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.049 qpair failed and we were unable to recover it. 00:25:14.049 [2024-07-15 23:51:49.024226] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.049 [2024-07-15 23:51:49.024293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.049 qpair failed and we were unable to recover it. 00:25:14.049 [2024-07-15 23:51:49.024608] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.049 [2024-07-15 23:51:49.024675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.049 qpair failed and we were unable to recover it. 00:25:14.049 [2024-07-15 23:51:49.024912] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.049 [2024-07-15 23:51:49.024998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.049 qpair failed and we were unable to recover it. 00:25:14.049 [2024-07-15 23:51:49.025238] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.049 [2024-07-15 23:51:49.025307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.049 qpair failed and we were unable to recover it. 00:25:14.049 [2024-07-15 23:51:49.025530] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.049 [2024-07-15 23:51:49.025600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.049 qpair failed and we were unable to recover it. 00:25:14.049 [2024-07-15 23:51:49.025920] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.049 [2024-07-15 23:51:49.026005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.049 qpair failed and we were unable to recover it. 00:25:14.049 [2024-07-15 23:51:49.026305] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.049 [2024-07-15 23:51:49.026372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.049 qpair failed and we were unable to recover it. 00:25:14.049 [2024-07-15 23:51:49.026667] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.049 [2024-07-15 23:51:49.026732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.049 qpair failed and we were unable to recover it. 00:25:14.049 [2024-07-15 23:51:49.026991] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.049 [2024-07-15 23:51:49.027061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.049 qpair failed and we were unable to recover it. 
00:25:14.049 [2024-07-15 23:51:49.027337] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.049 [2024-07-15 23:51:49.027404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.049 qpair failed and we were unable to recover it. 00:25:14.049 [2024-07-15 23:51:49.027670] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.049 [2024-07-15 23:51:49.027736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.049 qpair failed and we were unable to recover it. 00:25:14.049 [2024-07-15 23:51:49.028001] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.049 [2024-07-15 23:51:49.028070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.049 qpair failed and we were unable to recover it. 00:25:14.049 [2024-07-15 23:51:49.028341] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.049 [2024-07-15 23:51:49.028408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.049 qpair failed and we were unable to recover it. 00:25:14.049 [2024-07-15 23:51:49.028718] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.049 [2024-07-15 23:51:49.028785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.049 qpair failed and we were unable to recover it. 00:25:14.049 [2024-07-15 23:51:49.029070] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.049 [2024-07-15 23:51:49.029138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.049 qpair failed and we were unable to recover it. 00:25:14.049 [2024-07-15 23:51:49.029426] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.049 [2024-07-15 23:51:49.029492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.049 qpair failed and we were unable to recover it. 00:25:14.049 [2024-07-15 23:51:49.029799] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.049 [2024-07-15 23:51:49.029864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.049 qpair failed and we were unable to recover it. 00:25:14.049 [2024-07-15 23:51:49.030162] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.049 [2024-07-15 23:51:49.030231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.049 qpair failed and we were unable to recover it. 00:25:14.049 [2024-07-15 23:51:49.030518] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.049 [2024-07-15 23:51:49.030584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.049 qpair failed and we were unable to recover it. 
00:25:14.049 [2024-07-15 23:51:49.030840] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.049 [2024-07-15 23:51:49.030905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.049 qpair failed and we were unable to recover it. 00:25:14.049 [2024-07-15 23:51:49.031195] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.049 [2024-07-15 23:51:49.031266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.049 qpair failed and we were unable to recover it. 00:25:14.049 [2024-07-15 23:51:49.031574] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.049 [2024-07-15 23:51:49.031641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.049 qpair failed and we were unable to recover it. 00:25:14.049 [2024-07-15 23:51:49.031882] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.050 [2024-07-15 23:51:49.031951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.050 qpair failed and we were unable to recover it. 00:25:14.050 [2024-07-15 23:51:49.032238] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.050 [2024-07-15 23:51:49.032306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.050 qpair failed and we were unable to recover it. 00:25:14.050 [2024-07-15 23:51:49.032585] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.050 [2024-07-15 23:51:49.032651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.050 qpair failed and we were unable to recover it. 00:25:14.050 [2024-07-15 23:51:49.032917] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.050 [2024-07-15 23:51:49.033015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.050 qpair failed and we were unable to recover it. 00:25:14.050 [2024-07-15 23:51:49.033294] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.050 [2024-07-15 23:51:49.033361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.050 qpair failed and we were unable to recover it. 00:25:14.050 [2024-07-15 23:51:49.033607] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.050 [2024-07-15 23:51:49.033674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.050 qpair failed and we were unable to recover it. 00:25:14.050 [2024-07-15 23:51:49.033903] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.050 [2024-07-15 23:51:49.033990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.050 qpair failed and we were unable to recover it. 
00:25:14.050 [2024-07-15 23:51:49.034305] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.050 [2024-07-15 23:51:49.034371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.050 qpair failed and we were unable to recover it. 00:25:14.050 [2024-07-15 23:51:49.034676] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.050 [2024-07-15 23:51:49.034744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.050 qpair failed and we were unable to recover it. 00:25:14.050 [2024-07-15 23:51:49.035003] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.050 [2024-07-15 23:51:49.035072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.050 qpair failed and we were unable to recover it. 00:25:14.050 [2024-07-15 23:51:49.035392] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.050 [2024-07-15 23:51:49.035459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.050 qpair failed and we were unable to recover it. 00:25:14.050 [2024-07-15 23:51:49.035731] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.050 [2024-07-15 23:51:49.035798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.050 qpair failed and we were unable to recover it. 00:25:14.050 [2024-07-15 23:51:49.036101] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.050 [2024-07-15 23:51:49.036170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.050 qpair failed and we were unable to recover it. 00:25:14.050 [2024-07-15 23:51:49.036491] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.050 [2024-07-15 23:51:49.036557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.050 qpair failed and we were unable to recover it. 00:25:14.050 [2024-07-15 23:51:49.036874] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.050 [2024-07-15 23:51:49.036941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.050 qpair failed and we were unable to recover it. 00:25:14.050 [2024-07-15 23:51:49.037240] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.050 [2024-07-15 23:51:49.037305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.050 qpair failed and we were unable to recover it. 00:25:14.050 [2024-07-15 23:51:49.037616] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.050 [2024-07-15 23:51:49.037693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.050 qpair failed and we were unable to recover it. 
00:25:14.050 [2024-07-15 23:51:49.038028] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.050 [2024-07-15 23:51:49.038097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.050 qpair failed and we were unable to recover it. 00:25:14.050 [2024-07-15 23:51:49.038407] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.050 [2024-07-15 23:51:49.038474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.050 qpair failed and we were unable to recover it. 00:25:14.050 [2024-07-15 23:51:49.038796] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.050 [2024-07-15 23:51:49.038863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.050 qpair failed and we were unable to recover it. 00:25:14.050 [2024-07-15 23:51:49.039160] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.050 [2024-07-15 23:51:49.039227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.050 qpair failed and we were unable to recover it. 00:25:14.050 [2024-07-15 23:51:49.039551] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.050 [2024-07-15 23:51:49.039618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.050 qpair failed and we were unable to recover it. 00:25:14.050 [2024-07-15 23:51:49.039884] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.050 [2024-07-15 23:51:49.039951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.050 qpair failed and we were unable to recover it. 00:25:14.050 [2024-07-15 23:51:49.040244] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.050 [2024-07-15 23:51:49.040312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.050 qpair failed and we were unable to recover it. 00:25:14.050 [2024-07-15 23:51:49.040586] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.050 [2024-07-15 23:51:49.040651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.050 qpair failed and we were unable to recover it. 00:25:14.050 [2024-07-15 23:51:49.040938] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.050 [2024-07-15 23:51:49.041038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.050 qpair failed and we were unable to recover it. 00:25:14.050 [2024-07-15 23:51:49.041321] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.050 [2024-07-15 23:51:49.041389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.050 qpair failed and we were unable to recover it. 
00:25:14.050 [2024-07-15 23:51:49.041723] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.050 [2024-07-15 23:51:49.041790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.050 qpair failed and we were unable to recover it. 00:25:14.050 [2024-07-15 23:51:49.042068] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.050 [2024-07-15 23:51:49.042136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.050 qpair failed and we were unable to recover it. 00:25:14.050 [2024-07-15 23:51:49.042451] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.050 [2024-07-15 23:51:49.042518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.050 qpair failed and we were unable to recover it. 00:25:14.050 [2024-07-15 23:51:49.042841] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.050 [2024-07-15 23:51:49.042908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.050 qpair failed and we were unable to recover it. 00:25:14.050 [2024-07-15 23:51:49.043207] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.050 [2024-07-15 23:51:49.043273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.050 qpair failed and we were unable to recover it. 00:25:14.050 [2024-07-15 23:51:49.043549] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.050 [2024-07-15 23:51:49.043616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.050 qpair failed and we were unable to recover it. 00:25:14.050 [2024-07-15 23:51:49.043925] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.050 [2024-07-15 23:51:49.044010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.050 qpair failed and we were unable to recover it. 00:25:14.050 [2024-07-15 23:51:49.044264] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.050 [2024-07-15 23:51:49.044330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.050 qpair failed and we were unable to recover it. 00:25:14.050 [2024-07-15 23:51:49.044643] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.050 [2024-07-15 23:51:49.044710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.050 qpair failed and we were unable to recover it. 00:25:14.050 [2024-07-15 23:51:49.045039] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.050 [2024-07-15 23:51:49.045107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.050 qpair failed and we were unable to recover it. 
00:25:14.051 [2024-07-15 23:51:49.045431] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.051 [2024-07-15 23:51:49.045497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.051 qpair failed and we were unable to recover it. 00:25:14.051 [2024-07-15 23:51:49.045804] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.051 [2024-07-15 23:51:49.045869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.051 qpair failed and we were unable to recover it. 00:25:14.051 [2024-07-15 23:51:49.046204] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.051 [2024-07-15 23:51:49.046272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.051 qpair failed and we were unable to recover it. 00:25:14.051 [2024-07-15 23:51:49.046577] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.051 [2024-07-15 23:51:49.046643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.051 qpair failed and we were unable to recover it. 00:25:14.051 [2024-07-15 23:51:49.046916] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.051 [2024-07-15 23:51:49.046997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.051 qpair failed and we were unable to recover it. 00:25:14.051 [2024-07-15 23:51:49.047277] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.051 [2024-07-15 23:51:49.047344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.051 qpair failed and we were unable to recover it. 00:25:14.051 [2024-07-15 23:51:49.047644] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.051 [2024-07-15 23:51:49.047712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.051 qpair failed and we were unable to recover it. 00:25:14.051 [2024-07-15 23:51:49.048038] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.051 [2024-07-15 23:51:49.048107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.051 qpair failed and we were unable to recover it. 00:25:14.051 [2024-07-15 23:51:49.048364] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.051 [2024-07-15 23:51:49.048431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.051 qpair failed and we were unable to recover it. 00:25:14.051 [2024-07-15 23:51:49.048751] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.051 [2024-07-15 23:51:49.048816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.051 qpair failed and we were unable to recover it. 
00:25:14.051 [2024-07-15 23:51:49.049100] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.051 [2024-07-15 23:51:49.049168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.051 qpair failed and we were unable to recover it. 00:25:14.051 [2024-07-15 23:51:49.049441] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.051 [2024-07-15 23:51:49.049511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.051 qpair failed and we were unable to recover it. 00:25:14.051 [2024-07-15 23:51:49.049746] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.051 [2024-07-15 23:51:49.049813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.051 qpair failed and we were unable to recover it. 00:25:14.051 [2024-07-15 23:51:49.050122] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.051 [2024-07-15 23:51:49.050190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.051 qpair failed and we were unable to recover it. 00:25:14.051 [2024-07-15 23:51:49.050516] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.051 [2024-07-15 23:51:49.050581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.051 qpair failed and we were unable to recover it. 00:25:14.051 [2024-07-15 23:51:49.050894] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.051 [2024-07-15 23:51:49.050974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.051 qpair failed and we were unable to recover it. 00:25:14.051 [2024-07-15 23:51:49.051254] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.051 [2024-07-15 23:51:49.051323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.051 qpair failed and we were unable to recover it. 00:25:14.051 [2024-07-15 23:51:49.051569] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.051 [2024-07-15 23:51:49.051637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.051 qpair failed and we were unable to recover it. 00:25:14.051 [2024-07-15 23:51:49.051949] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.051 [2024-07-15 23:51:49.052029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.051 qpair failed and we were unable to recover it. 00:25:14.051 [2024-07-15 23:51:49.052333] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.051 [2024-07-15 23:51:49.052409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.051 qpair failed and we were unable to recover it. 
00:25:14.051 [2024-07-15 23:51:49.052734] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:25:14.051 [2024-07-15 23:51:49.052800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 
00:25:14.051 qpair failed and we were unable to recover it. 
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats verbatim for every reconnect attempt from 23:51:49.053 through 23:51:49.127 ...]
00:25:14.057 [2024-07-15 23:51:49.127337] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.057 [2024-07-15 23:51:49.127405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.057 qpair failed and we were unable to recover it. 00:25:14.057 [2024-07-15 23:51:49.127732] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.057 [2024-07-15 23:51:49.127799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.057 qpair failed and we were unable to recover it. 00:25:14.057 [2024-07-15 23:51:49.128076] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.057 [2024-07-15 23:51:49.128143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.057 qpair failed and we were unable to recover it. 00:25:14.057 [2024-07-15 23:51:49.128428] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.057 [2024-07-15 23:51:49.128496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.057 qpair failed and we were unable to recover it. 00:25:14.057 [2024-07-15 23:51:49.128806] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.057 [2024-07-15 23:51:49.128873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.057 qpair failed and we were unable to recover it. 00:25:14.057 [2024-07-15 23:51:49.129209] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.057 [2024-07-15 23:51:49.129278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.057 qpair failed and we were unable to recover it. 00:25:14.057 [2024-07-15 23:51:49.129590] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.057 [2024-07-15 23:51:49.129656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.057 qpair failed and we were unable to recover it. 00:25:14.057 [2024-07-15 23:51:49.129981] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.057 [2024-07-15 23:51:49.130049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.057 qpair failed and we were unable to recover it. 00:25:14.057 [2024-07-15 23:51:49.130357] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.057 [2024-07-15 23:51:49.130424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.057 qpair failed and we were unable to recover it. 00:25:14.057 [2024-07-15 23:51:49.130695] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.057 [2024-07-15 23:51:49.130760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.057 qpair failed and we were unable to recover it. 
00:25:14.057 [2024-07-15 23:51:49.131038] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.057 [2024-07-15 23:51:49.131106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.057 qpair failed and we were unable to recover it. 00:25:14.057 [2024-07-15 23:51:49.131340] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.057 [2024-07-15 23:51:49.131407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.057 qpair failed and we were unable to recover it. 00:25:14.057 [2024-07-15 23:51:49.131713] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.057 [2024-07-15 23:51:49.131778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.057 qpair failed and we were unable to recover it. 00:25:14.057 [2024-07-15 23:51:49.132085] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.057 [2024-07-15 23:51:49.132152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.057 qpair failed and we were unable to recover it. 00:25:14.057 [2024-07-15 23:51:49.132468] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.057 [2024-07-15 23:51:49.132534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.057 qpair failed and we were unable to recover it. 00:25:14.057 [2024-07-15 23:51:49.132807] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.057 [2024-07-15 23:51:49.132873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.057 qpair failed and we were unable to recover it. 00:25:14.057 [2024-07-15 23:51:49.133157] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.057 [2024-07-15 23:51:49.133225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.057 qpair failed and we were unable to recover it. 00:25:14.057 [2024-07-15 23:51:49.133534] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.057 [2024-07-15 23:51:49.133601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.057 qpair failed and we were unable to recover it. 00:25:14.057 [2024-07-15 23:51:49.133837] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.057 [2024-07-15 23:51:49.133904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.057 qpair failed and we were unable to recover it. 00:25:14.057 [2024-07-15 23:51:49.134202] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.057 [2024-07-15 23:51:49.134272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.057 qpair failed and we were unable to recover it. 
00:25:14.057 [2024-07-15 23:51:49.134581] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.057 [2024-07-15 23:51:49.134649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.057 qpair failed and we were unable to recover it. 00:25:14.057 [2024-07-15 23:51:49.134978] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.057 [2024-07-15 23:51:49.135046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.057 qpair failed and we were unable to recover it. 00:25:14.057 [2024-07-15 23:51:49.135361] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.057 [2024-07-15 23:51:49.135428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.057 qpair failed and we were unable to recover it. 00:25:14.057 [2024-07-15 23:51:49.135701] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.057 [2024-07-15 23:51:49.135767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.057 qpair failed and we were unable to recover it. 00:25:14.057 [2024-07-15 23:51:49.136093] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.058 [2024-07-15 23:51:49.136161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.058 qpair failed and we were unable to recover it. 00:25:14.058 [2024-07-15 23:51:49.136475] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.058 [2024-07-15 23:51:49.136541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.058 qpair failed and we were unable to recover it. 00:25:14.058 [2024-07-15 23:51:49.136783] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.058 [2024-07-15 23:51:49.136850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.058 qpair failed and we were unable to recover it. 00:25:14.058 [2024-07-15 23:51:49.137186] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.058 [2024-07-15 23:51:49.137254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.058 qpair failed and we were unable to recover it. 00:25:14.058 [2024-07-15 23:51:49.137562] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.058 [2024-07-15 23:51:49.137628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.058 qpair failed and we were unable to recover it. 00:25:14.058 [2024-07-15 23:51:49.137894] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.058 [2024-07-15 23:51:49.137985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.058 qpair failed and we were unable to recover it. 
00:25:14.058 [2024-07-15 23:51:49.138243] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.058 [2024-07-15 23:51:49.138312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.058 qpair failed and we were unable to recover it. 00:25:14.058 [2024-07-15 23:51:49.138628] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.058 [2024-07-15 23:51:49.138695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.058 qpair failed and we were unable to recover it. 00:25:14.058 [2024-07-15 23:51:49.138984] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.058 [2024-07-15 23:51:49.139052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.058 qpair failed and we were unable to recover it. 00:25:14.058 [2024-07-15 23:51:49.139322] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.058 [2024-07-15 23:51:49.139389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.058 qpair failed and we were unable to recover it. 00:25:14.058 [2024-07-15 23:51:49.139666] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.058 [2024-07-15 23:51:49.139732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.058 qpair failed and we were unable to recover it. 00:25:14.058 [2024-07-15 23:51:49.139946] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.058 [2024-07-15 23:51:49.140029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.058 qpair failed and we were unable to recover it. 00:25:14.058 [2024-07-15 23:51:49.140279] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.058 [2024-07-15 23:51:49.140345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.058 qpair failed and we were unable to recover it. 00:25:14.058 [2024-07-15 23:51:49.140657] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.058 [2024-07-15 23:51:49.140722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.058 qpair failed and we were unable to recover it. 00:25:14.058 [2024-07-15 23:51:49.141027] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.058 [2024-07-15 23:51:49.141095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.058 qpair failed and we were unable to recover it. 00:25:14.058 [2024-07-15 23:51:49.141361] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.058 [2024-07-15 23:51:49.141427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.058 qpair failed and we were unable to recover it. 
00:25:14.058 [2024-07-15 23:51:49.141742] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.058 [2024-07-15 23:51:49.141807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.058 qpair failed and we were unable to recover it. 00:25:14.058 [2024-07-15 23:51:49.142120] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.058 [2024-07-15 23:51:49.142187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.058 qpair failed and we were unable to recover it. 00:25:14.058 [2024-07-15 23:51:49.142410] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.058 [2024-07-15 23:51:49.142479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.058 qpair failed and we were unable to recover it. 00:25:14.058 [2024-07-15 23:51:49.142795] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.058 [2024-07-15 23:51:49.142862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.058 qpair failed and we were unable to recover it. 00:25:14.058 [2024-07-15 23:51:49.143104] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.058 [2024-07-15 23:51:49.143171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.058 qpair failed and we were unable to recover it. 00:25:14.058 [2024-07-15 23:51:49.143414] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.058 [2024-07-15 23:51:49.143482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.058 qpair failed and we were unable to recover it. 00:25:14.058 [2024-07-15 23:51:49.143718] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.058 [2024-07-15 23:51:49.143784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.058 qpair failed and we were unable to recover it. 00:25:14.058 [2024-07-15 23:51:49.144051] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.058 [2024-07-15 23:51:49.144146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.058 qpair failed and we were unable to recover it. 00:25:14.058 [2024-07-15 23:51:49.144424] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.058 [2024-07-15 23:51:49.144492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.058 qpair failed and we were unable to recover it. 00:25:14.058 [2024-07-15 23:51:49.144795] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.058 [2024-07-15 23:51:49.144861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.058 qpair failed and we were unable to recover it. 
00:25:14.058 [2024-07-15 23:51:49.145184] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.058 [2024-07-15 23:51:49.145251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.058 qpair failed and we were unable to recover it. 00:25:14.058 [2024-07-15 23:51:49.145497] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.058 [2024-07-15 23:51:49.145567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.058 qpair failed and we were unable to recover it. 00:25:14.058 [2024-07-15 23:51:49.145881] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.058 [2024-07-15 23:51:49.145946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.058 qpair failed and we were unable to recover it. 00:25:14.058 [2024-07-15 23:51:49.146271] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.058 [2024-07-15 23:51:49.146338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.058 qpair failed and we were unable to recover it. 00:25:14.058 [2024-07-15 23:51:49.146602] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.058 [2024-07-15 23:51:49.146668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.058 qpair failed and we were unable to recover it. 00:25:14.058 [2024-07-15 23:51:49.146950] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.058 [2024-07-15 23:51:49.147030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.058 qpair failed and we were unable to recover it. 00:25:14.058 [2024-07-15 23:51:49.147296] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.059 [2024-07-15 23:51:49.147364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.059 qpair failed and we were unable to recover it. 00:25:14.059 [2024-07-15 23:51:49.147619] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.059 [2024-07-15 23:51:49.147685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.059 qpair failed and we were unable to recover it. 00:25:14.327 [2024-07-15 23:51:49.147968] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.327 [2024-07-15 23:51:49.148036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.327 qpair failed and we were unable to recover it. 00:25:14.327 [2024-07-15 23:51:49.148289] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.327 [2024-07-15 23:51:49.148357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.327 qpair failed and we were unable to recover it. 
00:25:14.327 [2024-07-15 23:51:49.148623] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.327 [2024-07-15 23:51:49.148690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.327 qpair failed and we were unable to recover it. 00:25:14.327 [2024-07-15 23:51:49.148928] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.327 [2024-07-15 23:51:49.149013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.327 qpair failed and we were unable to recover it. 00:25:14.327 [2024-07-15 23:51:49.149311] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.327 [2024-07-15 23:51:49.149378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.328 qpair failed and we were unable to recover it. 00:25:14.328 [2024-07-15 23:51:49.149625] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.328 [2024-07-15 23:51:49.149691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.328 qpair failed and we were unable to recover it. 00:25:14.328 [2024-07-15 23:51:49.150005] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.328 [2024-07-15 23:51:49.150074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.328 qpair failed and we were unable to recover it. 00:25:14.328 [2024-07-15 23:51:49.150349] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.328 [2024-07-15 23:51:49.150419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.328 qpair failed and we were unable to recover it. 00:25:14.328 [2024-07-15 23:51:49.150689] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.328 [2024-07-15 23:51:49.150757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.328 qpair failed and we were unable to recover it. 00:25:14.328 [2024-07-15 23:51:49.151049] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.328 [2024-07-15 23:51:49.151116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.328 qpair failed and we were unable to recover it. 00:25:14.328 [2024-07-15 23:51:49.151399] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.328 [2024-07-15 23:51:49.151465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.328 qpair failed and we were unable to recover it. 00:25:14.328 [2024-07-15 23:51:49.151738] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.328 [2024-07-15 23:51:49.151818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.328 qpair failed and we were unable to recover it. 
00:25:14.328 [2024-07-15 23:51:49.152070] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.328 [2024-07-15 23:51:49.152139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.328 qpair failed and we were unable to recover it. 00:25:14.328 [2024-07-15 23:51:49.152381] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.328 [2024-07-15 23:51:49.152448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.328 qpair failed and we were unable to recover it. 00:25:14.328 [2024-07-15 23:51:49.152715] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.328 [2024-07-15 23:51:49.152782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.328 qpair failed and we were unable to recover it. 00:25:14.328 [2024-07-15 23:51:49.153085] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.328 [2024-07-15 23:51:49.153153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.328 qpair failed and we were unable to recover it. 00:25:14.328 [2024-07-15 23:51:49.153431] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.328 [2024-07-15 23:51:49.153498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.328 qpair failed and we were unable to recover it. 00:25:14.328 [2024-07-15 23:51:49.153775] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.328 [2024-07-15 23:51:49.153841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.328 qpair failed and we were unable to recover it. 00:25:14.328 [2024-07-15 23:51:49.154131] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.328 [2024-07-15 23:51:49.154199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.328 qpair failed and we were unable to recover it. 00:25:14.328 [2024-07-15 23:51:49.154512] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.328 [2024-07-15 23:51:49.154578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.328 qpair failed and we were unable to recover it. 00:25:14.328 [2024-07-15 23:51:49.154866] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.328 [2024-07-15 23:51:49.154932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.328 qpair failed and we were unable to recover it. 00:25:14.328 [2024-07-15 23:51:49.155260] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.328 [2024-07-15 23:51:49.155327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.328 qpair failed and we were unable to recover it. 
00:25:14.328 [2024-07-15 23:51:49.155610] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.328 [2024-07-15 23:51:49.155678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.328 qpair failed and we were unable to recover it. 00:25:14.328 [2024-07-15 23:51:49.155973] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.328 [2024-07-15 23:51:49.156040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.328 qpair failed and we were unable to recover it. 00:25:14.328 [2024-07-15 23:51:49.156301] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.328 [2024-07-15 23:51:49.156367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.328 qpair failed and we were unable to recover it. 00:25:14.328 [2024-07-15 23:51:49.156656] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.328 [2024-07-15 23:51:49.156723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.328 qpair failed and we were unable to recover it. 00:25:14.328 [2024-07-15 23:51:49.156997] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.328 [2024-07-15 23:51:49.157067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.328 qpair failed and we were unable to recover it. 00:25:14.328 [2024-07-15 23:51:49.157357] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.328 [2024-07-15 23:51:49.157424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.328 qpair failed and we were unable to recover it. 00:25:14.328 [2024-07-15 23:51:49.157733] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.328 [2024-07-15 23:51:49.157800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.328 qpair failed and we were unable to recover it. 00:25:14.328 [2024-07-15 23:51:49.158082] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.328 [2024-07-15 23:51:49.158151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.328 qpair failed and we were unable to recover it. 00:25:14.328 [2024-07-15 23:51:49.158381] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.328 [2024-07-15 23:51:49.158451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.328 qpair failed and we were unable to recover it. 00:25:14.329 [2024-07-15 23:51:49.158739] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.329 [2024-07-15 23:51:49.158806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.329 qpair failed and we were unable to recover it. 
00:25:14.329 [2024-07-15 23:51:49.159088] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.329 [2024-07-15 23:51:49.159156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.329 qpair failed and we were unable to recover it. 00:25:14.329 [2024-07-15 23:51:49.159427] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.329 [2024-07-15 23:51:49.159496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.329 qpair failed and we were unable to recover it. 00:25:14.329 [2024-07-15 23:51:49.159762] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.329 [2024-07-15 23:51:49.159829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.329 qpair failed and we were unable to recover it. 00:25:14.329 [2024-07-15 23:51:49.160077] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.329 [2024-07-15 23:51:49.160145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.329 qpair failed and we were unable to recover it. 00:25:14.329 [2024-07-15 23:51:49.160425] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.329 [2024-07-15 23:51:49.160492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.329 qpair failed and we were unable to recover it. 00:25:14.329 [2024-07-15 23:51:49.160774] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.329 [2024-07-15 23:51:49.160840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.329 qpair failed and we were unable to recover it. 00:25:14.329 [2024-07-15 23:51:49.161159] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.329 [2024-07-15 23:51:49.161227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.329 qpair failed and we were unable to recover it. 00:25:14.329 [2024-07-15 23:51:49.161499] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.329 [2024-07-15 23:51:49.161568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.329 qpair failed and we were unable to recover it. 00:25:14.329 [2024-07-15 23:51:49.161839] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.329 [2024-07-15 23:51:49.161908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.329 qpair failed and we were unable to recover it. 00:25:14.329 [2024-07-15 23:51:49.162232] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.329 [2024-07-15 23:51:49.162300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.329 qpair failed and we were unable to recover it. 
00:25:14.329 [2024-07-15 23:51:49.162566] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.329 [2024-07-15 23:51:49.162635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.329 qpair failed and we were unable to recover it. 00:25:14.329 [2024-07-15 23:51:49.162903] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.329 [2024-07-15 23:51:49.162990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.329 qpair failed and we were unable to recover it. 00:25:14.329 [2024-07-15 23:51:49.163267] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.329 [2024-07-15 23:51:49.163336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.329 qpair failed and we were unable to recover it. 00:25:14.329 [2024-07-15 23:51:49.163650] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.329 [2024-07-15 23:51:49.163718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.329 qpair failed and we were unable to recover it. 00:25:14.329 [2024-07-15 23:51:49.164048] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.329 [2024-07-15 23:51:49.164115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.329 qpair failed and we were unable to recover it. 00:25:14.329 [2024-07-15 23:51:49.164373] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.329 [2024-07-15 23:51:49.164439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.329 qpair failed and we were unable to recover it. 00:25:14.329 [2024-07-15 23:51:49.164718] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.329 [2024-07-15 23:51:49.164786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.329 qpair failed and we were unable to recover it. 00:25:14.329 [2024-07-15 23:51:49.165037] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.329 [2024-07-15 23:51:49.165105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.329 qpair failed and we were unable to recover it. 00:25:14.329 [2024-07-15 23:51:49.165409] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.329 [2024-07-15 23:51:49.165477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.329 qpair failed and we were unable to recover it. 00:25:14.329 [2024-07-15 23:51:49.165756] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.329 [2024-07-15 23:51:49.165834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.329 qpair failed and we were unable to recover it. 
00:25:14.329 [2024-07-15 23:51:49.166111] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.329 [2024-07-15 23:51:49.166182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.329 qpair failed and we were unable to recover it. 00:25:14.329 [2024-07-15 23:51:49.166480] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.329 [2024-07-15 23:51:49.166546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.329 qpair failed and we were unable to recover it. 00:25:14.329 [2024-07-15 23:51:49.166847] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.329 [2024-07-15 23:51:49.166913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.329 qpair failed and we were unable to recover it. 00:25:14.329 [2024-07-15 23:51:49.167194] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.329 [2024-07-15 23:51:49.167261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.329 qpair failed and we were unable to recover it. 00:25:14.329 [2024-07-15 23:51:49.167507] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.329 [2024-07-15 23:51:49.167574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.329 qpair failed and we were unable to recover it. 00:25:14.329 [2024-07-15 23:51:49.167884] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.329 [2024-07-15 23:51:49.167950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.329 qpair failed and we were unable to recover it. 00:25:14.329 [2024-07-15 23:51:49.168200] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.330 [2024-07-15 23:51:49.168268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.330 qpair failed and we were unable to recover it. 00:25:14.330 [2024-07-15 23:51:49.168506] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.330 [2024-07-15 23:51:49.168573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.330 qpair failed and we were unable to recover it. 00:25:14.330 [2024-07-15 23:51:49.168852] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.330 [2024-07-15 23:51:49.168919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.330 qpair failed and we were unable to recover it. 00:25:14.330 [2024-07-15 23:51:49.169232] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.330 [2024-07-15 23:51:49.169300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.330 qpair failed and we were unable to recover it. 
00:25:14.330 [2024-07-15 23:51:49.169566] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.330 [2024-07-15 23:51:49.169632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.330 qpair failed and we were unable to recover it. 00:25:14.330 [2024-07-15 23:51:49.169935] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.330 [2024-07-15 23:51:49.170022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.330 qpair failed and we were unable to recover it. 00:25:14.330 [2024-07-15 23:51:49.170301] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.330 [2024-07-15 23:51:49.170369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.330 qpair failed and we were unable to recover it. 00:25:14.330 [2024-07-15 23:51:49.170657] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.330 [2024-07-15 23:51:49.170725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.330 qpair failed and we were unable to recover it. 00:25:14.330 [2024-07-15 23:51:49.171000] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.330 [2024-07-15 23:51:49.171069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.330 qpair failed and we were unable to recover it. 00:25:14.330 [2024-07-15 23:51:49.171384] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.330 [2024-07-15 23:51:49.171450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.330 qpair failed and we were unable to recover it. 00:25:14.330 [2024-07-15 23:51:49.171722] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.330 [2024-07-15 23:51:49.171787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.330 qpair failed and we were unable to recover it. 00:25:14.330 [2024-07-15 23:51:49.172100] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.330 [2024-07-15 23:51:49.172169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.330 qpair failed and we were unable to recover it. 00:25:14.330 [2024-07-15 23:51:49.172438] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.330 [2024-07-15 23:51:49.172504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.330 qpair failed and we were unable to recover it. 00:25:14.330 [2024-07-15 23:51:49.172772] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.330 [2024-07-15 23:51:49.172841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.330 qpair failed and we were unable to recover it. 
00:25:14.330 [2024-07-15 23:51:49.173082] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:25:14.330 [2024-07-15 23:51:49.173158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 
00:25:14.330 qpair failed and we were unable to recover it. 
00:25:14.330 [2024-07-15 23:51:49.173433 .. 23:51:49.189906] (the same connect() failed, errno = 111 / sock connection error message pair repeated, with only the microsecond timestamps changing, for 47 further attempts on tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420; each attempt ended with "qpair failed and we were unable to recover it.") 
00:25:14.332 [2024-07-15 23:51:49.190242] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:25:14.332 [2024-07-15 23:51:49.190344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 
00:25:14.332 qpair failed and we were unable to recover it. 
00:25:14.332 [2024-07-15 23:51:49.190648 .. 23:51:49.247584] (the same message pair repeated for 161 further attempts on tqpair=0x7a7200 with addr=10.0.0.2, port=4420; each attempt ended with "qpair failed and we were unable to recover it.") 
00:25:14.338 [2024-07-15 23:51:49.247859] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.338 [2024-07-15 23:51:49.247927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.338 qpair failed and we were unable to recover it. 00:25:14.338 [2024-07-15 23:51:49.248220] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.338 [2024-07-15 23:51:49.248285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.338 qpair failed and we were unable to recover it. 00:25:14.338 [2024-07-15 23:51:49.248587] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.338 [2024-07-15 23:51:49.248652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.338 qpair failed and we were unable to recover it. 00:25:14.338 [2024-07-15 23:51:49.248928] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.338 [2024-07-15 23:51:49.249007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.338 qpair failed and we were unable to recover it. 00:25:14.338 [2024-07-15 23:51:49.249277] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.338 [2024-07-15 23:51:49.249342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.338 qpair failed and we were unable to recover it. 00:25:14.338 [2024-07-15 23:51:49.249639] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.338 [2024-07-15 23:51:49.249711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.338 qpair failed and we were unable to recover it. 00:25:14.338 [2024-07-15 23:51:49.249982] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.338 [2024-07-15 23:51:49.250051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.338 qpair failed and we were unable to recover it. 00:25:14.338 [2024-07-15 23:51:49.250340] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.338 [2024-07-15 23:51:49.250405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.338 qpair failed and we were unable to recover it. 00:25:14.338 [2024-07-15 23:51:49.250719] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.338 [2024-07-15 23:51:49.250783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.338 qpair failed and we were unable to recover it. 00:25:14.338 [2024-07-15 23:51:49.251052] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.338 [2024-07-15 23:51:49.251120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.338 qpair failed and we were unable to recover it. 
00:25:14.338 [2024-07-15 23:51:49.251394] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.338 [2024-07-15 23:51:49.251459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.338 qpair failed and we were unable to recover it. 00:25:14.338 [2024-07-15 23:51:49.251765] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.338 [2024-07-15 23:51:49.251829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.338 qpair failed and we were unable to recover it. 00:25:14.338 [2024-07-15 23:51:49.252135] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.338 [2024-07-15 23:51:49.252202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.338 qpair failed and we were unable to recover it. 00:25:14.339 [2024-07-15 23:51:49.252548] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.339 [2024-07-15 23:51:49.252615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.339 qpair failed and we were unable to recover it. 00:25:14.339 [2024-07-15 23:51:49.252886] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.339 [2024-07-15 23:51:49.252951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.339 qpair failed and we were unable to recover it. 00:25:14.339 [2024-07-15 23:51:49.253241] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.339 [2024-07-15 23:51:49.253309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.339 qpair failed and we were unable to recover it. 00:25:14.339 [2024-07-15 23:51:49.253587] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.339 [2024-07-15 23:51:49.253651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.339 qpair failed and we were unable to recover it. 00:25:14.339 [2024-07-15 23:51:49.253930] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.339 [2024-07-15 23:51:49.254009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.339 qpair failed and we were unable to recover it. 00:25:14.339 [2024-07-15 23:51:49.254285] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.339 [2024-07-15 23:51:49.254350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.339 qpair failed and we were unable to recover it. 00:25:14.339 [2024-07-15 23:51:49.254658] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.339 [2024-07-15 23:51:49.254723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.339 qpair failed and we were unable to recover it. 
00:25:14.339 [2024-07-15 23:51:49.255051] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.339 [2024-07-15 23:51:49.255119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.339 qpair failed and we were unable to recover it. 00:25:14.339 [2024-07-15 23:51:49.255434] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.339 [2024-07-15 23:51:49.255500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.339 qpair failed and we were unable to recover it. 00:25:14.339 [2024-07-15 23:51:49.255804] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.339 [2024-07-15 23:51:49.255868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.339 qpair failed and we were unable to recover it. 00:25:14.339 [2024-07-15 23:51:49.256128] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.339 [2024-07-15 23:51:49.256195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.339 qpair failed and we were unable to recover it. 00:25:14.339 [2024-07-15 23:51:49.256523] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.339 [2024-07-15 23:51:49.256588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.339 qpair failed and we were unable to recover it. 00:25:14.339 [2024-07-15 23:51:49.256877] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.339 [2024-07-15 23:51:49.256942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.339 qpair failed and we were unable to recover it. 00:25:14.339 [2024-07-15 23:51:49.257280] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.339 [2024-07-15 23:51:49.257346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.339 qpair failed and we were unable to recover it. 00:25:14.339 [2024-07-15 23:51:49.257665] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.339 [2024-07-15 23:51:49.257731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.339 qpair failed and we were unable to recover it. 00:25:14.339 [2024-07-15 23:51:49.257997] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.339 [2024-07-15 23:51:49.258064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.339 qpair failed and we were unable to recover it. 00:25:14.339 [2024-07-15 23:51:49.258303] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.339 [2024-07-15 23:51:49.258371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.339 qpair failed and we were unable to recover it. 
00:25:14.339 [2024-07-15 23:51:49.258677] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.339 [2024-07-15 23:51:49.258742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.339 qpair failed and we were unable to recover it. 00:25:14.339 [2024-07-15 23:51:49.259026] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.339 [2024-07-15 23:51:49.259095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.339 qpair failed and we were unable to recover it. 00:25:14.339 [2024-07-15 23:51:49.259392] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.339 [2024-07-15 23:51:49.259460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.339 qpair failed and we were unable to recover it. 00:25:14.339 [2024-07-15 23:51:49.259749] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.339 [2024-07-15 23:51:49.259815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.339 qpair failed and we were unable to recover it. 00:25:14.339 [2024-07-15 23:51:49.260098] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.339 [2024-07-15 23:51:49.260166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.339 qpair failed and we were unable to recover it. 00:25:14.339 [2024-07-15 23:51:49.260474] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.339 [2024-07-15 23:51:49.260539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.339 qpair failed and we were unable to recover it. 00:25:14.339 [2024-07-15 23:51:49.260809] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.339 [2024-07-15 23:51:49.260874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.339 qpair failed and we were unable to recover it. 00:25:14.339 [2024-07-15 23:51:49.261199] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.339 [2024-07-15 23:51:49.261267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.339 qpair failed and we were unable to recover it. 00:25:14.339 [2024-07-15 23:51:49.261578] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.339 [2024-07-15 23:51:49.261643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.339 qpair failed and we were unable to recover it. 00:25:14.339 [2024-07-15 23:51:49.261924] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.339 [2024-07-15 23:51:49.262007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.339 qpair failed and we were unable to recover it. 
00:25:14.339 [2024-07-15 23:51:49.262280] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.339 [2024-07-15 23:51:49.262355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.339 qpair failed and we were unable to recover it. 00:25:14.340 [2024-07-15 23:51:49.262651] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.340 [2024-07-15 23:51:49.262716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.340 qpair failed and we were unable to recover it. 00:25:14.340 [2024-07-15 23:51:49.262992] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.340 [2024-07-15 23:51:49.263059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.340 qpair failed and we were unable to recover it. 00:25:14.340 [2024-07-15 23:51:49.263368] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.340 [2024-07-15 23:51:49.263433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.340 qpair failed and we were unable to recover it. 00:25:14.340 [2024-07-15 23:51:49.263736] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.340 [2024-07-15 23:51:49.263802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.340 qpair failed and we were unable to recover it. 00:25:14.340 [2024-07-15 23:51:49.264111] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.340 [2024-07-15 23:51:49.264177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.340 qpair failed and we were unable to recover it. 00:25:14.340 [2024-07-15 23:51:49.264495] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.340 [2024-07-15 23:51:49.264560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.340 qpair failed and we were unable to recover it. 00:25:14.340 [2024-07-15 23:51:49.264851] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.340 [2024-07-15 23:51:49.264919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.340 qpair failed and we were unable to recover it. 00:25:14.340 [2024-07-15 23:51:49.265218] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.340 [2024-07-15 23:51:49.265284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.340 qpair failed and we were unable to recover it. 00:25:14.340 [2024-07-15 23:51:49.265567] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.340 [2024-07-15 23:51:49.265632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.340 qpair failed and we were unable to recover it. 
00:25:14.340 [2024-07-15 23:51:49.265898] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.340 [2024-07-15 23:51:49.265977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.340 qpair failed and we were unable to recover it. 00:25:14.340 [2024-07-15 23:51:49.266278] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.340 [2024-07-15 23:51:49.266344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.340 qpair failed and we were unable to recover it. 00:25:14.340 [2024-07-15 23:51:49.266655] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.340 [2024-07-15 23:51:49.266721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.340 qpair failed and we were unable to recover it. 00:25:14.340 [2024-07-15 23:51:49.267034] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.340 [2024-07-15 23:51:49.267102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.340 qpair failed and we were unable to recover it. 00:25:14.340 [2024-07-15 23:51:49.267427] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.340 [2024-07-15 23:51:49.267492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.340 qpair failed and we were unable to recover it. 00:25:14.340 [2024-07-15 23:51:49.267803] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.340 [2024-07-15 23:51:49.267868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.340 qpair failed and we were unable to recover it. 00:25:14.340 [2024-07-15 23:51:49.268162] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.340 [2024-07-15 23:51:49.268228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.340 qpair failed and we were unable to recover it. 00:25:14.340 [2024-07-15 23:51:49.268549] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.340 [2024-07-15 23:51:49.268613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.340 qpair failed and we were unable to recover it. 00:25:14.340 [2024-07-15 23:51:49.268916] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.340 [2024-07-15 23:51:49.269007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.340 qpair failed and we were unable to recover it. 00:25:14.340 [2024-07-15 23:51:49.269331] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.340 [2024-07-15 23:51:49.269396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.340 qpair failed and we were unable to recover it. 
00:25:14.340 [2024-07-15 23:51:49.269655] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.341 [2024-07-15 23:51:49.269720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.341 qpair failed and we were unable to recover it. 00:25:14.341 [2024-07-15 23:51:49.270043] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.341 [2024-07-15 23:51:49.270109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.341 qpair failed and we were unable to recover it. 00:25:14.341 [2024-07-15 23:51:49.270417] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.341 [2024-07-15 23:51:49.270483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.341 qpair failed and we were unable to recover it. 00:25:14.341 [2024-07-15 23:51:49.270789] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.341 [2024-07-15 23:51:49.270853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.341 qpair failed and we were unable to recover it. 00:25:14.341 [2024-07-15 23:51:49.271165] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.341 [2024-07-15 23:51:49.271232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.341 qpair failed and we were unable to recover it. 00:25:14.341 [2024-07-15 23:51:49.271520] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.341 [2024-07-15 23:51:49.271586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.341 qpair failed and we were unable to recover it. 00:25:14.341 [2024-07-15 23:51:49.271901] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.341 [2024-07-15 23:51:49.271995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.341 qpair failed and we were unable to recover it. 00:25:14.341 [2024-07-15 23:51:49.272279] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.341 [2024-07-15 23:51:49.272357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.341 qpair failed and we were unable to recover it. 00:25:14.341 [2024-07-15 23:51:49.272677] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.341 [2024-07-15 23:51:49.272743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.341 qpair failed and we were unable to recover it. 00:25:14.341 [2024-07-15 23:51:49.273071] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.341 [2024-07-15 23:51:49.273138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.341 qpair failed and we were unable to recover it. 
00:25:14.341 [2024-07-15 23:51:49.273457] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.341 [2024-07-15 23:51:49.273522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.341 qpair failed and we were unable to recover it. 00:25:14.341 [2024-07-15 23:51:49.273810] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.341 [2024-07-15 23:51:49.273874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.341 qpair failed and we were unable to recover it. 00:25:14.341 [2024-07-15 23:51:49.274133] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.341 [2024-07-15 23:51:49.274201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.341 qpair failed and we were unable to recover it. 00:25:14.341 [2024-07-15 23:51:49.274513] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.341 [2024-07-15 23:51:49.274578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.341 qpair failed and we were unable to recover it. 00:25:14.341 [2024-07-15 23:51:49.274842] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.341 [2024-07-15 23:51:49.274907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.341 qpair failed and we were unable to recover it. 00:25:14.341 [2024-07-15 23:51:49.275219] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.341 [2024-07-15 23:51:49.275319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.341 qpair failed and we were unable to recover it. 00:25:14.341 [2024-07-15 23:51:49.275651] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.341 [2024-07-15 23:51:49.275720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.341 qpair failed and we were unable to recover it. 00:25:14.341 [2024-07-15 23:51:49.276028] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.341 [2024-07-15 23:51:49.276097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.341 qpair failed and we were unable to recover it. 00:25:14.341 [2024-07-15 23:51:49.276339] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.341 [2024-07-15 23:51:49.276410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.341 qpair failed and we were unable to recover it. 00:25:14.341 [2024-07-15 23:51:49.276680] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.341 [2024-07-15 23:51:49.276748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.341 qpair failed and we were unable to recover it. 
00:25:14.341 [2024-07-15 23:51:49.277066] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.341 [2024-07-15 23:51:49.277133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.341 qpair failed and we were unable to recover it. 00:25:14.341 [2024-07-15 23:51:49.277465] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.341 [2024-07-15 23:51:49.277534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.341 qpair failed and we were unable to recover it. 00:25:14.341 [2024-07-15 23:51:49.277862] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.341 [2024-07-15 23:51:49.277928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.341 qpair failed and we were unable to recover it. 00:25:14.341 [2024-07-15 23:51:49.278214] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.341 [2024-07-15 23:51:49.278278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.341 qpair failed and we were unable to recover it. 00:25:14.341 [2024-07-15 23:51:49.278548] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.341 [2024-07-15 23:51:49.278613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.341 qpair failed and we were unable to recover it. 00:25:14.341 [2024-07-15 23:51:49.278884] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.341 [2024-07-15 23:51:49.278949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.341 qpair failed and we were unable to recover it. 00:25:14.341 [2024-07-15 23:51:49.279248] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.341 [2024-07-15 23:51:49.279312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.341 qpair failed and we were unable to recover it. 00:25:14.341 [2024-07-15 23:51:49.279602] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.341 [2024-07-15 23:51:49.279668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.341 qpair failed and we were unable to recover it. 00:25:14.341 [2024-07-15 23:51:49.279939] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.341 [2024-07-15 23:51:49.280017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.341 qpair failed and we were unable to recover it. 00:25:14.341 [2024-07-15 23:51:49.280291] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.341 [2024-07-15 23:51:49.280359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.341 qpair failed and we were unable to recover it. 
00:25:14.341 [2024-07-15 23:51:49.280598] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.341 [2024-07-15 23:51:49.280666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.341 qpair failed and we were unable to recover it. 00:25:14.341 [2024-07-15 23:51:49.280936] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.342 [2024-07-15 23:51:49.281021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.342 qpair failed and we were unable to recover it. 00:25:14.342 [2024-07-15 23:51:49.281263] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.342 [2024-07-15 23:51:49.281328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.342 qpair failed and we were unable to recover it. 00:25:14.342 [2024-07-15 23:51:49.281646] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.342 [2024-07-15 23:51:49.281711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.342 qpair failed and we were unable to recover it. 00:25:14.342 [2024-07-15 23:51:49.281973] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.342 [2024-07-15 23:51:49.282039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.342 qpair failed and we were unable to recover it. 00:25:14.342 [2024-07-15 23:51:49.282364] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.342 [2024-07-15 23:51:49.282429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.342 qpair failed and we were unable to recover it. 00:25:14.342 [2024-07-15 23:51:49.282697] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.342 [2024-07-15 23:51:49.282762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.342 qpair failed and we were unable to recover it. 00:25:14.342 [2024-07-15 23:51:49.283070] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.342 [2024-07-15 23:51:49.283137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.342 qpair failed and we were unable to recover it. 00:25:14.342 [2024-07-15 23:51:49.283420] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.342 [2024-07-15 23:51:49.283486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.342 qpair failed and we were unable to recover it. 00:25:14.342 [2024-07-15 23:51:49.283765] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.342 [2024-07-15 23:51:49.283831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.342 qpair failed and we were unable to recover it. 
00:25:14.342 [2024-07-15 23:51:49.284153] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.342 [2024-07-15 23:51:49.284224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.342 qpair failed and we were unable to recover it. 00:25:14.342 [2024-07-15 23:51:49.284430] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.342 [2024-07-15 23:51:49.284480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.342 qpair failed and we were unable to recover it. 00:25:14.342 [2024-07-15 23:51:49.284696] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.342 [2024-07-15 23:51:49.284745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.342 qpair failed and we were unable to recover it. 00:25:14.342 [2024-07-15 23:51:49.284990] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.342 [2024-07-15 23:51:49.285039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.342 qpair failed and we were unable to recover it. 00:25:14.342 [2024-07-15 23:51:49.285215] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.342 [2024-07-15 23:51:49.285265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.342 qpair failed and we were unable to recover it. 00:25:14.342 [2024-07-15 23:51:49.285469] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.342 [2024-07-15 23:51:49.285520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.342 qpair failed and we were unable to recover it. 00:25:14.342 [2024-07-15 23:51:49.285727] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.342 [2024-07-15 23:51:49.285776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.342 qpair failed and we were unable to recover it. 00:25:14.342 [2024-07-15 23:51:49.286024] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.342 [2024-07-15 23:51:49.286060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.342 qpair failed and we were unable to recover it. 00:25:14.342 [2024-07-15 23:51:49.286218] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.342 [2024-07-15 23:51:49.286258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.342 qpair failed and we were unable to recover it. 00:25:14.342 [2024-07-15 23:51:49.286438] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.342 [2024-07-15 23:51:49.286473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.342 qpair failed and we were unable to recover it. 
00:25:14.342 [2024-07-15 23:51:49.286606] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.342 [2024-07-15 23:51:49.286640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.342 qpair failed and we were unable to recover it. 00:25:14.342 [2024-07-15 23:51:49.286772] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.342 [2024-07-15 23:51:49.286806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.342 qpair failed and we were unable to recover it. 00:25:14.342 [2024-07-15 23:51:49.287059] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.342 [2024-07-15 23:51:49.287095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.342 qpair failed and we were unable to recover it. 00:25:14.342 [2024-07-15 23:51:49.287224] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.342 [2024-07-15 23:51:49.287258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.342 qpair failed and we were unable to recover it. 00:25:14.342 [2024-07-15 23:51:49.287485] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.342 [2024-07-15 23:51:49.287535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.342 qpair failed and we were unable to recover it. 00:25:14.342 [2024-07-15 23:51:49.287745] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.342 [2024-07-15 23:51:49.287794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.342 qpair failed and we were unable to recover it. 00:25:14.342 [2024-07-15 23:51:49.288034] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.342 [2024-07-15 23:51:49.288071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.342 qpair failed and we were unable to recover it. 00:25:14.342 [2024-07-15 23:51:49.288806] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.342 [2024-07-15 23:51:49.288877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.342 qpair failed and we were unable to recover it. 00:25:14.342 [2024-07-15 23:51:49.289152] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.342 [2024-07-15 23:51:49.289188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.342 qpair failed and we were unable to recover it. 00:25:14.342 [2024-07-15 23:51:49.289368] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.342 [2024-07-15 23:51:49.289410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.342 qpair failed and we were unable to recover it. 
00:25:14.342 [2024-07-15 23:51:49.289657] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.342 [2024-07-15 23:51:49.289698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.342 qpair failed and we were unable to recover it. 00:25:14.342 [2024-07-15 23:51:49.289910] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.342 [2024-07-15 23:51:49.289944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.342 qpair failed and we were unable to recover it. 00:25:14.342 [2024-07-15 23:51:49.290124] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.342 [2024-07-15 23:51:49.290159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.342 qpair failed and we were unable to recover it. 00:25:14.342 [2024-07-15 23:51:49.290374] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.342 [2024-07-15 23:51:49.290408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.342 qpair failed and we were unable to recover it. 00:25:14.342 [2024-07-15 23:51:49.290531] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.342 [2024-07-15 23:51:49.290565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.342 qpair failed and we were unable to recover it. 00:25:14.342 [2024-07-15 23:51:49.290733] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.342 [2024-07-15 23:51:49.290774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.342 qpair failed and we were unable to recover it. 00:25:14.342 [2024-07-15 23:51:49.290996] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.342 [2024-07-15 23:51:49.291032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.342 qpair failed and we were unable to recover it. 00:25:14.342 [2024-07-15 23:51:49.291191] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.342 [2024-07-15 23:51:49.291225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.342 qpair failed and we were unable to recover it. 00:25:14.342 [2024-07-15 23:51:49.291389] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.342 [2024-07-15 23:51:49.291431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.342 qpair failed and we were unable to recover it. 00:25:14.342 [2024-07-15 23:51:49.291612] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.342 [2024-07-15 23:51:49.291653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.342 qpair failed and we were unable to recover it. 
00:25:14.342 [2024-07-15 23:51:49.291926] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.342 [2024-07-15 23:51:49.291975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.342 qpair failed and we were unable to recover it. 00:25:14.342 [2024-07-15 23:51:49.292117] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.343 [2024-07-15 23:51:49.292151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.343 qpair failed and we were unable to recover it. 00:25:14.343 [2024-07-15 23:51:49.292360] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.343 [2024-07-15 23:51:49.292408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.343 qpair failed and we were unable to recover it. 00:25:14.343 [2024-07-15 23:51:49.292622] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.343 [2024-07-15 23:51:49.292677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.343 qpair failed and we were unable to recover it. 00:25:14.343 [2024-07-15 23:51:49.292885] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.343 [2024-07-15 23:51:49.292920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.343 qpair failed and we were unable to recover it. 00:25:14.343 [2024-07-15 23:51:49.293073] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.343 [2024-07-15 23:51:49.293113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.343 qpair failed and we were unable to recover it. 00:25:14.343 [2024-07-15 23:51:49.293283] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.343 [2024-07-15 23:51:49.293318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.343 qpair failed and we were unable to recover it. 00:25:14.343 [2024-07-15 23:51:49.293526] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.343 [2024-07-15 23:51:49.293576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.343 qpair failed and we were unable to recover it. 00:25:14.343 [2024-07-15 23:51:49.293779] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.343 [2024-07-15 23:51:49.293828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.343 qpair failed and we were unable to recover it. 00:25:14.343 [2024-07-15 23:51:49.294027] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.343 [2024-07-15 23:51:49.294062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.343 qpair failed and we were unable to recover it. 
00:25:14.343 [2024-07-15 23:51:49.294244] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.343 [2024-07-15 23:51:49.294279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.343 qpair failed and we were unable to recover it. 00:25:14.343 [2024-07-15 23:51:49.294467] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.343 [2024-07-15 23:51:49.294501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.343 qpair failed and we were unable to recover it. 00:25:14.343 [2024-07-15 23:51:49.294833] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.343 [2024-07-15 23:51:49.294906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.343 qpair failed and we were unable to recover it. 00:25:14.343 [2024-07-15 23:51:49.295093] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.343 [2024-07-15 23:51:49.295128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.343 qpair failed and we were unable to recover it. 00:25:14.343 [2024-07-15 23:51:49.295264] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.343 [2024-07-15 23:51:49.295309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.343 qpair failed and we were unable to recover it. 00:25:14.343 [2024-07-15 23:51:49.295489] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.343 [2024-07-15 23:51:49.295523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.343 qpair failed and we were unable to recover it. 00:25:14.343 [2024-07-15 23:51:49.295803] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.343 [2024-07-15 23:51:49.295868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.343 qpair failed and we were unable to recover it. 00:25:14.343 [2024-07-15 23:51:49.296096] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.343 [2024-07-15 23:51:49.296131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.343 qpair failed and we were unable to recover it. 00:25:14.343 [2024-07-15 23:51:49.296289] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.343 [2024-07-15 23:51:49.296326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.343 qpair failed and we were unable to recover it. 00:25:14.343 [2024-07-15 23:51:49.296597] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.343 [2024-07-15 23:51:49.296668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.343 qpair failed and we were unable to recover it. 
00:25:14.343 [2024-07-15 23:51:49.296843] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.343 [2024-07-15 23:51:49.296878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.343 qpair failed and we were unable to recover it. 00:25:14.343 [2024-07-15 23:51:49.297034] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.343 [2024-07-15 23:51:49.297070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.343 qpair failed and we were unable to recover it. 00:25:14.343 [2024-07-15 23:51:49.297197] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.343 [2024-07-15 23:51:49.297259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.343 qpair failed and we were unable to recover it. 00:25:14.343 [2024-07-15 23:51:49.297430] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.343 [2024-07-15 23:51:49.297464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.343 qpair failed and we were unable to recover it. 00:25:14.343 [2024-07-15 23:51:49.297626] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.343 [2024-07-15 23:51:49.297660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.343 qpair failed and we were unable to recover it. 00:25:14.343 [2024-07-15 23:51:49.297873] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.343 [2024-07-15 23:51:49.297908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.343 qpair failed and we were unable to recover it. 00:25:14.343 [2024-07-15 23:51:49.298040] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.343 [2024-07-15 23:51:49.298075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.343 qpair failed and we were unable to recover it. 00:25:14.343 [2024-07-15 23:51:49.298198] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.343 [2024-07-15 23:51:49.298252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.343 qpair failed and we were unable to recover it. 00:25:14.343 [2024-07-15 23:51:49.298508] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.343 [2024-07-15 23:51:49.298558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.343 qpair failed and we were unable to recover it. 00:25:14.343 [2024-07-15 23:51:49.298807] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.343 [2024-07-15 23:51:49.298856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.343 qpair failed and we were unable to recover it. 
00:25:14.343 [2024-07-15 23:51:49.299055] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.343 [2024-07-15 23:51:49.299089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.343 qpair failed and we were unable to recover it. 00:25:14.343 [2024-07-15 23:51:49.299241] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.343 [2024-07-15 23:51:49.299275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.343 qpair failed and we were unable to recover it. 00:25:14.343 [2024-07-15 23:51:49.299454] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.343 [2024-07-15 23:51:49.299489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.343 qpair failed and we were unable to recover it. 00:25:14.343 [2024-07-15 23:51:49.299617] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.343 [2024-07-15 23:51:49.299676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.343 qpair failed and we were unable to recover it. 00:25:14.343 [2024-07-15 23:51:49.299921] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.343 [2024-07-15 23:51:49.299964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.343 qpair failed and we were unable to recover it. 00:25:14.343 [2024-07-15 23:51:49.300117] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.343 [2024-07-15 23:51:49.300151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.343 qpair failed and we were unable to recover it. 00:25:14.343 [2024-07-15 23:51:49.300268] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.343 [2024-07-15 23:51:49.300311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.343 qpair failed and we were unable to recover it. 00:25:14.343 [2024-07-15 23:51:49.300464] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.343 [2024-07-15 23:51:49.300497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.343 qpair failed and we were unable to recover it. 00:25:14.343 [2024-07-15 23:51:49.300652] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.343 [2024-07-15 23:51:49.300687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.343 qpair failed and we were unable to recover it. 00:25:14.343 [2024-07-15 23:51:49.300915] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.343 [2024-07-15 23:51:49.300949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.343 qpair failed and we were unable to recover it. 
00:25:14.343 [2024-07-15 23:51:49.301139] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.343 [2024-07-15 23:51:49.301173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.343 qpair failed and we were unable to recover it. 00:25:14.344 [2024-07-15 23:51:49.301363] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.344 [2024-07-15 23:51:49.301397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.344 qpair failed and we were unable to recover it. 00:25:14.344 [2024-07-15 23:51:49.301618] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.344 [2024-07-15 23:51:49.301664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.344 qpair failed and we were unable to recover it. 00:25:14.344 [2024-07-15 23:51:49.301888] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.344 [2024-07-15 23:51:49.301940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.344 qpair failed and we were unable to recover it. 00:25:14.344 [2024-07-15 23:51:49.302084] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.344 [2024-07-15 23:51:49.302117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.344 qpair failed and we were unable to recover it. 00:25:14.344 [2024-07-15 23:51:49.302274] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.344 [2024-07-15 23:51:49.302350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.344 qpair failed and we were unable to recover it. 00:25:14.344 [2024-07-15 23:51:49.302612] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.344 [2024-07-15 23:51:49.302673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.344 qpair failed and we were unable to recover it. 00:25:14.344 [2024-07-15 23:51:49.302925] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.344 [2024-07-15 23:51:49.302970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.344 qpair failed and we were unable to recover it. 00:25:14.344 [2024-07-15 23:51:49.303101] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.344 [2024-07-15 23:51:49.303135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.344 qpair failed and we were unable to recover it. 00:25:14.344 [2024-07-15 23:51:49.303297] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.344 [2024-07-15 23:51:49.303345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.344 qpair failed and we were unable to recover it. 
00:25:14.344 [2024-07-15 23:51:49.303592] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.344 [2024-07-15 23:51:49.303656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.344 qpair failed and we were unable to recover it. 00:25:14.344 [2024-07-15 23:51:49.303892] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.344 [2024-07-15 23:51:49.303928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.344 qpair failed and we were unable to recover it. 00:25:14.344 [2024-07-15 23:51:49.304072] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.344 [2024-07-15 23:51:49.304107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.344 qpair failed and we were unable to recover it. 00:25:14.344 [2024-07-15 23:51:49.304276] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.344 [2024-07-15 23:51:49.304309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.344 qpair failed and we were unable to recover it. 00:25:14.344 [2024-07-15 23:51:49.304469] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.344 [2024-07-15 23:51:49.304533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.344 qpair failed and we were unable to recover it. 00:25:14.344 [2024-07-15 23:51:49.304814] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.344 [2024-07-15 23:51:49.304847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.344 qpair failed and we were unable to recover it. 00:25:14.344 [2024-07-15 23:51:49.305002] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.344 [2024-07-15 23:51:49.305037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.344 qpair failed and we were unable to recover it. 00:25:14.344 [2024-07-15 23:51:49.305193] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.344 [2024-07-15 23:51:49.305228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.344 qpair failed and we were unable to recover it. 00:25:14.344 [2024-07-15 23:51:49.305502] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.344 [2024-07-15 23:51:49.305566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.344 qpair failed and we were unable to recover it. 00:25:14.344 [2024-07-15 23:51:49.305909] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.344 [2024-07-15 23:51:49.306001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.344 qpair failed and we were unable to recover it. 
00:25:14.344 [2024-07-15 23:51:49.306175] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.344 [2024-07-15 23:51:49.306210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.344 qpair failed and we were unable to recover it. 00:25:14.344 [2024-07-15 23:51:49.306363] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.344 [2024-07-15 23:51:49.306397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.344 qpair failed and we were unable to recover it. 00:25:14.344 [2024-07-15 23:51:49.306554] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.344 [2024-07-15 23:51:49.306587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.344 qpair failed and we were unable to recover it. 00:25:14.344 [2024-07-15 23:51:49.306872] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.344 [2024-07-15 23:51:49.306936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.344 qpair failed and we were unable to recover it. 00:25:14.344 [2024-07-15 23:51:49.307175] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.344 [2024-07-15 23:51:49.307210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.344 qpair failed and we were unable to recover it. 00:25:14.344 [2024-07-15 23:51:49.307384] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.344 [2024-07-15 23:51:49.307419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.344 qpair failed and we were unable to recover it. 00:25:14.344 [2024-07-15 23:51:49.307581] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.344 [2024-07-15 23:51:49.307614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.344 qpair failed and we were unable to recover it. 00:25:14.344 [2024-07-15 23:51:49.307743] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.344 [2024-07-15 23:51:49.307777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.344 qpair failed and we were unable to recover it. 00:25:14.344 [2024-07-15 23:51:49.307905] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.344 [2024-07-15 23:51:49.307939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.344 qpair failed and we were unable to recover it. 00:25:14.344 [2024-07-15 23:51:49.308117] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.344 [2024-07-15 23:51:49.308152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.344 qpair failed and we were unable to recover it. 
00:25:14.344 [2024-07-15 23:51:49.308346] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.344 [2024-07-15 23:51:49.308379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.344 qpair failed and we were unable to recover it. 00:25:14.344 [2024-07-15 23:51:49.308575] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.344 [2024-07-15 23:51:49.308608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.344 qpair failed and we were unable to recover it. 00:25:14.344 [2024-07-15 23:51:49.308787] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.344 [2024-07-15 23:51:49.308822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.344 qpair failed and we were unable to recover it. 00:25:14.344 [2024-07-15 23:51:49.309055] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.344 [2024-07-15 23:51:49.309095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.344 qpair failed and we were unable to recover it. 00:25:14.344 [2024-07-15 23:51:49.309226] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.344 [2024-07-15 23:51:49.309260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.344 qpair failed and we were unable to recover it. 00:25:14.344 [2024-07-15 23:51:49.309408] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.344 [2024-07-15 23:51:49.309457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.344 qpair failed and we were unable to recover it. 00:25:14.344 [2024-07-15 23:51:49.309679] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.344 [2024-07-15 23:51:49.309713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.344 qpair failed and we were unable to recover it. 00:25:14.344 [2024-07-15 23:51:49.309863] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.344 [2024-07-15 23:51:49.309896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.344 qpair failed and we were unable to recover it. 00:25:14.344 [2024-07-15 23:51:49.310093] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.344 [2024-07-15 23:51:49.310128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.344 qpair failed and we were unable to recover it. 00:25:14.344 [2024-07-15 23:51:49.310283] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.344 [2024-07-15 23:51:49.310317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.344 qpair failed and we were unable to recover it. 
00:25:14.345 [2024-07-15 23:51:49.310495] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.345 [2024-07-15 23:51:49.310569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.345 qpair failed and we were unable to recover it. 00:25:14.345 [2024-07-15 23:51:49.310747] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.345 [2024-07-15 23:51:49.310782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.345 qpair failed and we were unable to recover it. 00:25:14.345 [2024-07-15 23:51:49.310932] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.345 [2024-07-15 23:51:49.310972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.345 qpair failed and we were unable to recover it. 00:25:14.345 [2024-07-15 23:51:49.311117] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.345 [2024-07-15 23:51:49.311150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.345 qpair failed and we were unable to recover it. 00:25:14.345 [2024-07-15 23:51:49.311335] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.345 [2024-07-15 23:51:49.311369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.345 qpair failed and we were unable to recover it. 00:25:14.345 [2024-07-15 23:51:49.311569] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.345 [2024-07-15 23:51:49.311626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.345 qpair failed and we were unable to recover it. 00:25:14.345 [2024-07-15 23:51:49.311781] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.345 [2024-07-15 23:51:49.311815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.345 qpair failed and we were unable to recover it. 00:25:14.345 [2024-07-15 23:51:49.312000] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.345 [2024-07-15 23:51:49.312070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.345 qpair failed and we were unable to recover it. 00:25:14.345 [2024-07-15 23:51:49.312212] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.345 [2024-07-15 23:51:49.312245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.345 qpair failed and we were unable to recover it. 00:25:14.345 [2024-07-15 23:51:49.312408] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.345 [2024-07-15 23:51:49.312441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.345 qpair failed and we were unable to recover it. 
00:25:14.345 [2024-07-15 23:51:49.312594] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.345 [2024-07-15 23:51:49.312629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.345 qpair failed and we were unable to recover it. 00:25:14.345 [2024-07-15 23:51:49.312815] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.345 [2024-07-15 23:51:49.312848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.345 qpair failed and we were unable to recover it. 00:25:14.345 [2024-07-15 23:51:49.312996] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.345 [2024-07-15 23:51:49.313031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.345 qpair failed and we were unable to recover it. 00:25:14.345 [2024-07-15 23:51:49.313247] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.345 [2024-07-15 23:51:49.313281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.345 qpair failed and we were unable to recover it. 00:25:14.345 [2024-07-15 23:51:49.313408] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.345 [2024-07-15 23:51:49.313443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.345 qpair failed and we were unable to recover it. 00:25:14.345 [2024-07-15 23:51:49.313597] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.345 [2024-07-15 23:51:49.313640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.345 qpair failed and we were unable to recover it. 00:25:14.345 [2024-07-15 23:51:49.313821] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.345 [2024-07-15 23:51:49.313855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.345 qpair failed and we were unable to recover it. 00:25:14.345 [2024-07-15 23:51:49.313977] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.345 [2024-07-15 23:51:49.314021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.345 qpair failed and we were unable to recover it. 00:25:14.345 [2024-07-15 23:51:49.314197] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.345 [2024-07-15 23:51:49.314235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.345 qpair failed and we were unable to recover it. 00:25:14.345 [2024-07-15 23:51:49.314366] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.345 [2024-07-15 23:51:49.314401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.345 qpair failed and we were unable to recover it. 
00:25:14.345 [2024-07-15 23:51:49.314530] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.345 [2024-07-15 23:51:49.314563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.345 qpair failed and we were unable to recover it. 00:25:14.345 [2024-07-15 23:51:49.314807] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.345 [2024-07-15 23:51:49.314842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.345 qpair failed and we were unable to recover it. 00:25:14.345 [2024-07-15 23:51:49.315001] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.345 [2024-07-15 23:51:49.315045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.345 qpair failed and we were unable to recover it. 00:25:14.345 [2024-07-15 23:51:49.315246] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.345 [2024-07-15 23:51:49.315291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.345 qpair failed and we were unable to recover it. 00:25:14.345 [2024-07-15 23:51:49.315450] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.345 [2024-07-15 23:51:49.315484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.345 qpair failed and we were unable to recover it. 00:25:14.345 [2024-07-15 23:51:49.315638] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.345 [2024-07-15 23:51:49.315694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.345 qpair failed and we were unable to recover it. 00:25:14.345 [2024-07-15 23:51:49.315881] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.345 [2024-07-15 23:51:49.315914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.345 qpair failed and we were unable to recover it. 00:25:14.345 [2024-07-15 23:51:49.316077] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.345 [2024-07-15 23:51:49.316112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.345 qpair failed and we were unable to recover it. 00:25:14.345 [2024-07-15 23:51:49.316314] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.345 [2024-07-15 23:51:49.316349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.345 qpair failed and we were unable to recover it. 00:25:14.345 [2024-07-15 23:51:49.316476] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.345 [2024-07-15 23:51:49.316510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.345 qpair failed and we were unable to recover it. 
00:25:14.345 [2024-07-15 23:51:49.316648] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.345 [2024-07-15 23:51:49.316683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.345 qpair failed and we were unable to recover it. 00:25:14.345 [2024-07-15 23:51:49.316882] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.345 [2024-07-15 23:51:49.316916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.345 qpair failed and we were unable to recover it. 00:25:14.345 [2024-07-15 23:51:49.317101] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.345 [2024-07-15 23:51:49.317135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.345 qpair failed and we were unable to recover it. 00:25:14.345 [2024-07-15 23:51:49.317297] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.346 [2024-07-15 23:51:49.317331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.346 qpair failed and we were unable to recover it. 00:25:14.346 [2024-07-15 23:51:49.317475] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.346 [2024-07-15 23:51:49.317514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.346 qpair failed and we were unable to recover it. 00:25:14.346 [2024-07-15 23:51:49.317668] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.346 [2024-07-15 23:51:49.317701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.346 qpair failed and we were unable to recover it. 00:25:14.346 [2024-07-15 23:51:49.317849] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.346 [2024-07-15 23:51:49.317882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.346 qpair failed and we were unable to recover it. 00:25:14.346 [2024-07-15 23:51:49.318031] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.346 [2024-07-15 23:51:49.318067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.346 qpair failed and we were unable to recover it. 00:25:14.346 [2024-07-15 23:51:49.318184] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.346 [2024-07-15 23:51:49.318229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.346 qpair failed and we were unable to recover it. 00:25:14.346 [2024-07-15 23:51:49.318381] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.346 [2024-07-15 23:51:49.318417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.346 qpair failed and we were unable to recover it. 
00:25:14.346 [2024-07-15 23:51:49.318573] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.346 [2024-07-15 23:51:49.318606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.346 qpair failed and we were unable to recover it. 00:25:14.346 [2024-07-15 23:51:49.318781] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.346 [2024-07-15 23:51:49.318816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.346 qpair failed and we were unable to recover it. 00:25:14.346 [2024-07-15 23:51:49.318994] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.346 [2024-07-15 23:51:49.319031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.346 qpair failed and we were unable to recover it. 00:25:14.346 [2024-07-15 23:51:49.319157] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.346 [2024-07-15 23:51:49.319191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.346 qpair failed and we were unable to recover it. 00:25:14.346 [2024-07-15 23:51:49.319346] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.346 [2024-07-15 23:51:49.319379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.346 qpair failed and we were unable to recover it. 00:25:14.346 [2024-07-15 23:51:49.319558] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.346 [2024-07-15 23:51:49.319592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.346 qpair failed and we were unable to recover it. 00:25:14.346 [2024-07-15 23:51:49.319803] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.346 [2024-07-15 23:51:49.319838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.346 qpair failed and we were unable to recover it. 00:25:14.346 [2024-07-15 23:51:49.319976] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.346 [2024-07-15 23:51:49.320022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.346 qpair failed and we were unable to recover it. 00:25:14.346 [2024-07-15 23:51:49.320181] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.346 [2024-07-15 23:51:49.320225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.346 qpair failed and we were unable to recover it. 00:25:14.346 [2024-07-15 23:51:49.320421] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.346 [2024-07-15 23:51:49.320454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.346 qpair failed and we were unable to recover it. 
00:25:14.346 [2024-07-15 23:51:49.320582] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.346 [2024-07-15 23:51:49.320616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.346 qpair failed and we were unable to recover it. 00:25:14.346 [2024-07-15 23:51:49.320794] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.346 [2024-07-15 23:51:49.320828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.346 qpair failed and we were unable to recover it. 00:25:14.346 [2024-07-15 23:51:49.320966] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.346 [2024-07-15 23:51:49.321012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.346 qpair failed and we were unable to recover it. 00:25:14.346 [2024-07-15 23:51:49.321170] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.346 [2024-07-15 23:51:49.321204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.346 qpair failed and we were unable to recover it. 00:25:14.346 [2024-07-15 23:51:49.321334] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.346 [2024-07-15 23:51:49.321370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.346 qpair failed and we were unable to recover it. 00:25:14.346 [2024-07-15 23:51:49.321526] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.346 [2024-07-15 23:51:49.321559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.346 qpair failed and we were unable to recover it. 00:25:14.346 [2024-07-15 23:51:49.321771] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.346 [2024-07-15 23:51:49.321804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.346 qpair failed and we were unable to recover it. 00:25:14.346 [2024-07-15 23:51:49.321935] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.346 [2024-07-15 23:51:49.321978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.346 qpair failed and we were unable to recover it. 00:25:14.346 [2024-07-15 23:51:49.322107] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.346 [2024-07-15 23:51:49.322141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.346 qpair failed and we were unable to recover it. 00:25:14.346 [2024-07-15 23:51:49.322318] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.346 [2024-07-15 23:51:49.322351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.346 qpair failed and we were unable to recover it. 
00:25:14.346 [2024-07-15 23:51:49.322480] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.346 [2024-07-15 23:51:49.322513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.346 qpair failed and we were unable to recover it. 00:25:14.346 [2024-07-15 23:51:49.322696] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.346 [2024-07-15 23:51:49.322735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.346 qpair failed and we were unable to recover it. 00:25:14.346 [2024-07-15 23:51:49.322868] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.346 [2024-07-15 23:51:49.322902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.346 qpair failed and we were unable to recover it. 00:25:14.346 [2024-07-15 23:51:49.323086] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.346 [2024-07-15 23:51:49.323120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.346 qpair failed and we were unable to recover it. 00:25:14.346 [2024-07-15 23:51:49.323346] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.346 [2024-07-15 23:51:49.323379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.346 qpair failed and we were unable to recover it. 00:25:14.346 [2024-07-15 23:51:49.323527] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.346 [2024-07-15 23:51:49.323560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.346 qpair failed and we were unable to recover it. 00:25:14.346 [2024-07-15 23:51:49.323742] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.346 [2024-07-15 23:51:49.323777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.346 qpair failed and we were unable to recover it. 00:25:14.346 [2024-07-15 23:51:49.323918] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.346 [2024-07-15 23:51:49.323952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.346 qpair failed and we were unable to recover it. 00:25:14.346 [2024-07-15 23:51:49.324099] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.346 [2024-07-15 23:51:49.324133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.346 qpair failed and we were unable to recover it. 00:25:14.346 [2024-07-15 23:51:49.324287] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.346 [2024-07-15 23:51:49.324320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.346 qpair failed and we were unable to recover it. 
00:25:14.346 [2024-07-15 23:51:49.324473] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.346 [2024-07-15 23:51:49.324507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.346 qpair failed and we were unable to recover it. 00:25:14.346 [2024-07-15 23:51:49.324662] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.346 [2024-07-15 23:51:49.324697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.346 qpair failed and we were unable to recover it. 00:25:14.346 [2024-07-15 23:51:49.324850] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.346 [2024-07-15 23:51:49.324884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.346 qpair failed and we were unable to recover it. 00:25:14.346 [2024-07-15 23:51:49.325048] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.347 [2024-07-15 23:51:49.325082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.347 qpair failed and we were unable to recover it. 00:25:14.347 [2024-07-15 23:51:49.325197] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.347 [2024-07-15 23:51:49.325239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.347 qpair failed and we were unable to recover it. 00:25:14.347 [2024-07-15 23:51:49.325398] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.347 [2024-07-15 23:51:49.325433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.347 qpair failed and we were unable to recover it. 00:25:14.347 [2024-07-15 23:51:49.325584] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.347 [2024-07-15 23:51:49.325619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.347 qpair failed and we were unable to recover it. 00:25:14.347 [2024-07-15 23:51:49.325798] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.347 [2024-07-15 23:51:49.325833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.347 qpair failed and we were unable to recover it. 00:25:14.347 [2024-07-15 23:51:49.325994] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.347 [2024-07-15 23:51:49.326031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.347 qpair failed and we were unable to recover it. 00:25:14.347 [2024-07-15 23:51:49.326210] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.347 [2024-07-15 23:51:49.326243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.347 qpair failed and we were unable to recover it. 
00:25:14.347 [2024-07-15 23:51:49.326380] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.347 [2024-07-15 23:51:49.326414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.347 qpair failed and we were unable to recover it. 00:25:14.347 [2024-07-15 23:51:49.326569] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.347 [2024-07-15 23:51:49.326604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.347 qpair failed and we were unable to recover it. 00:25:14.347 [2024-07-15 23:51:49.326780] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.347 [2024-07-15 23:51:49.326843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.347 qpair failed and we were unable to recover it. 00:25:14.347 [2024-07-15 23:51:49.326973] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.347 [2024-07-15 23:51:49.327008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.347 qpair failed and we were unable to recover it. 00:25:14.347 [2024-07-15 23:51:49.327131] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.347 [2024-07-15 23:51:49.327165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.347 qpair failed and we were unable to recover it. 00:25:14.347 [2024-07-15 23:51:49.327401] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.347 [2024-07-15 23:51:49.327436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.347 qpair failed and we were unable to recover it. 00:25:14.347 [2024-07-15 23:51:49.327585] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.347 [2024-07-15 23:51:49.327619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.347 qpair failed and we were unable to recover it. 00:25:14.347 [2024-07-15 23:51:49.327775] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.347 [2024-07-15 23:51:49.327809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.347 qpair failed and we were unable to recover it. 00:25:14.347 [2024-07-15 23:51:49.327927] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.347 [2024-07-15 23:51:49.327974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.347 qpair failed and we were unable to recover it. 00:25:14.347 [2024-07-15 23:51:49.328120] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.347 [2024-07-15 23:51:49.328155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.347 qpair failed and we were unable to recover it. 
00:25:14.347 [2024-07-15 23:51:49.328365] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.347 [2024-07-15 23:51:49.328417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.347 qpair failed and we were unable to recover it. 00:25:14.347 [2024-07-15 23:51:49.328602] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.347 [2024-07-15 23:51:49.328645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.347 qpair failed and we were unable to recover it. 00:25:14.347 [2024-07-15 23:51:49.328797] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.347 [2024-07-15 23:51:49.328830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.347 qpair failed and we were unable to recover it. 00:25:14.347 [2024-07-15 23:51:49.329009] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.347 [2024-07-15 23:51:49.329044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.347 qpair failed and we were unable to recover it. 00:25:14.347 [2024-07-15 23:51:49.329194] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.347 [2024-07-15 23:51:49.329233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.347 qpair failed and we were unable to recover it. 00:25:14.347 [2024-07-15 23:51:49.329391] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.347 [2024-07-15 23:51:49.329426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.347 qpair failed and we were unable to recover it. 00:25:14.347 [2024-07-15 23:51:49.329574] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.347 [2024-07-15 23:51:49.329609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.347 qpair failed and we were unable to recover it. 00:25:14.347 [2024-07-15 23:51:49.329738] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.347 [2024-07-15 23:51:49.329773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.347 qpair failed and we were unable to recover it. 00:25:14.347 [2024-07-15 23:51:49.329927] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.347 [2024-07-15 23:51:49.329966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.347 qpair failed and we were unable to recover it. 00:25:14.347 [2024-07-15 23:51:49.330130] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.347 [2024-07-15 23:51:49.330164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.347 qpair failed and we were unable to recover it. 
00:25:14.351 [2024-07-15 23:51:49.382163] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.351 [2024-07-15 23:51:49.382234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.351 qpair failed and we were unable to recover it. 00:25:14.351 [2024-07-15 23:51:49.382554] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.351 [2024-07-15 23:51:49.382630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.351 qpair failed and we were unable to recover it. 00:25:14.351 [2024-07-15 23:51:49.382917] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.351 [2024-07-15 23:51:49.382990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.351 qpair failed and we were unable to recover it. 00:25:14.351 [2024-07-15 23:51:49.383279] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.351 [2024-07-15 23:51:49.383358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3888490 Killed "${NVMF_APP[@]}" "$@" 00:25:14.351 qpair failed and we were unable to recover it. 00:25:14.351 [2024-07-15 23:51:49.383670] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.352 [2024-07-15 23:51:49.383748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.352 qpair failed and we were unable to recover it. 00:25:14.352 [2024-07-15 23:51:49.384018] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.352 [2024-07-15 23:51:49.384080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.352 23:51:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:25:14.352 qpair failed and we were unable to recover it. 00:25:14.352 [2024-07-15 23:51:49.384357] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.352 [2024-07-15 23:51:49.384435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.352 23:51:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:14.352 qpair failed and we were unable to recover it. 00:25:14.352 [2024-07-15 23:51:49.384710] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.352 23:51:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:14.352 [2024-07-15 23:51:49.384790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.352 qpair failed and we were unable to recover it. 
00:25:14.352 [2024-07-15 23:51:49.385023] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.352 23:51:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:14.352 [2024-07-15 23:51:49.385075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.352 qpair failed and we were unable to recover it. 00:25:14.352 [2024-07-15 23:51:49.385267] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.352 23:51:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:14.352 [2024-07-15 23:51:49.385317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.352 qpair failed and we were unable to recover it. 00:25:14.352 [2024-07-15 23:51:49.385590] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.352 [2024-07-15 23:51:49.385667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.352 qpair failed and we were unable to recover it. 00:25:14.352 [2024-07-15 23:51:49.385932] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.352 [2024-07-15 23:51:49.386023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.352 qpair failed and we were unable to recover it. 00:25:14.352 [2024-07-15 23:51:49.386160] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.352 [2024-07-15 23:51:49.386194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.352 qpair failed and we were unable to recover it. 00:25:14.352 [2024-07-15 23:51:49.386371] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.352 [2024-07-15 23:51:49.386404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.352 qpair failed and we were unable to recover it. 00:25:14.352 [2024-07-15 23:51:49.386556] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.352 [2024-07-15 23:51:49.386589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.352 qpair failed and we were unable to recover it. 00:25:14.352 [2024-07-15 23:51:49.386713] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.352 [2024-07-15 23:51:49.386747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.352 qpair failed and we were unable to recover it. 00:25:14.352 [2024-07-15 23:51:49.386902] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.352 [2024-07-15 23:51:49.386936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.352 qpair failed and we were unable to recover it. 
00:25:14.352 [2024-07-15 23:51:49.387090] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.352 [2024-07-15 23:51:49.387125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.352 qpair failed and we were unable to recover it.
00:25:14.352 [2024-07-15 23:51:49.387272] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.352 [2024-07-15 23:51:49.387306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.352 qpair failed and we were unable to recover it.
00:25:14.352 [2024-07-15 23:51:49.387481] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.352 [2024-07-15 23:51:49.387515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.352 qpair failed and we were unable to recover it.
00:25:14.352 [2024-07-15 23:51:49.387691] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.352 [2024-07-15 23:51:49.387725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.352 qpair failed and we were unable to recover it.
00:25:14.352 [2024-07-15 23:51:49.387843] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.352 [2024-07-15 23:51:49.387877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.352 qpair failed and we were unable to recover it.
00:25:14.352 [2024-07-15 23:51:49.388027] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.352 [2024-07-15 23:51:49.388062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.352 qpair failed and we were unable to recover it.
00:25:14.352 [2024-07-15 23:51:49.388188] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.352 [2024-07-15 23:51:49.388222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.352 qpair failed and we were unable to recover it.
00:25:14.352 [2024-07-15 23:51:49.388363] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.352 [2024-07-15 23:51:49.388397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.352 qpair failed and we were unable to recover it.
00:25:14.352 [2024-07-15 23:51:49.388580] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.352 [2024-07-15 23:51:49.388614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.352 qpair failed and we were unable to recover it.
00:25:14.352 [2024-07-15 23:51:49.388770] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.352 [2024-07-15 23:51:49.388804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.352 qpair failed and we were unable to recover it.
00:25:14.352 [2024-07-15 23:51:49.389031] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.352 [2024-07-15 23:51:49.389067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.352 qpair failed and we were unable to recover it.
00:25:14.352 [2024-07-15 23:51:49.389194] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.352 [2024-07-15 23:51:49.389230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.352 qpair failed and we were unable to recover it.
00:25:14.352 [2024-07-15 23:51:49.389453] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.352 [2024-07-15 23:51:49.389499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.352 qpair failed and we were unable to recover it.
00:25:14.352 [2024-07-15 23:51:49.389747] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.352 [2024-07-15 23:51:49.389815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.352 qpair failed and we were unable to recover it.
00:25:14.352 [2024-07-15 23:51:49.390045] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.352 [2024-07-15 23:51:49.390081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.352 qpair failed and we were unable to recover it.
00:25:14.352 23:51:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3889043
00:25:14.352 23:51:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:25:14.352 [2024-07-15 23:51:49.390238] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.352 23:51:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3889043
00:25:14.352 [2024-07-15 23:51:49.390273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.352 qpair failed and we were unable to recover it.
00:25:14.352 [2024-07-15 23:51:49.390433] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.352 [2024-07-15 23:51:49.390471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.352 qpair failed and we were unable to recover it.
00:25:14.352 23:51:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3889043 ']'
00:25:14.352 [2024-07-15 23:51:49.390695] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.352 [2024-07-15 23:51:49.390730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.352 23:51:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:14.352 qpair failed and we were unable to recover it.
00:25:14.352 [2024-07-15 23:51:49.390858] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.352 23:51:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:25:14.352 [2024-07-15 23:51:49.390893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.352 qpair failed and we were unable to recover it.
00:25:14.352 23:51:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:14.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:14.352 [2024-07-15 23:51:49.391068] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.352 [2024-07-15 23:51:49.391104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.352 qpair failed and we were unable to recover it.
00:25:14.352 23:51:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:25:14.352 [2024-07-15 23:51:49.391221] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.352 [2024-07-15 23:51:49.391255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.352 qpair failed and we were unable to recover it.
00:25:14.352 23:51:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:14.352 [2024-07-15 23:51:49.391388] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.352 [2024-07-15 23:51:49.391421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.352 qpair failed and we were unable to recover it.
00:25:14.352 [2024-07-15 23:51:49.391568] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.352 [2024-07-15 23:51:49.391605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.352 qpair failed and we were unable to recover it.
00:25:14.352 [2024-07-15 23:51:49.391764] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.352 [2024-07-15 23:51:49.391799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.352 qpair failed and we were unable to recover it.
00:25:14.352 [2024-07-15 23:51:49.391948] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.352 [2024-07-15 23:51:49.391991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.352 qpair failed and we were unable to recover it.
00:25:14.352 [2024-07-15 23:51:49.392128] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.352 [2024-07-15 23:51:49.392163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.352 qpair failed and we were unable to recover it.
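While the connect() spam continues, the harness is restarting the target: nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace and waitforlisten polls (with max_retries=100) until the new process, PID 3889043, is listening on /var/tmp/spdk.sock. A rough stand-in for that wait loop follows, assuming a simple connect-and-retry strategy; it is a sketch, not the real autotest_common.sh helper, which also verifies the PID via the RPC interface.

/* Rough sketch of a waitforlisten-style helper (an assumption, not the
 * real autotest_common.sh implementation): retry connect() on the RPC
 * UNIX-domain socket up to max_retries times, sleeping between attempts,
 * and report success once the restarted target accepts connections. */
#include <stdio.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_listen(const char *path, int max_retries)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    snprintf(addr.sun_path, sizeof(addr.sun_path), "%s", path);

    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;                    /* socket creation itself failed */
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);                    /* target is up and listening */
            return 0;
        }
        close(fd);
        usleep(100 * 1000);               /* not listening yet; retry in 100 ms */
    }
    return -1;                            /* gave up after max_retries attempts */
}

int main(void)
{
    return wait_for_listen("/var/tmp/spdk.sock", 100) == 0 ? 0 : 1;
}

Once this wait succeeds the host-side qpairs can reconnect to 10.0.0.2:4420, which is why the errno = 111 stream below eventually matters only until the new target finishes starting.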
00:25:14.352 [2024-07-15 23:51:49.392289] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.352 [2024-07-15 23:51:49.392324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.352 qpair failed and we were unable to recover it. 00:25:14.352 [2024-07-15 23:51:49.392462] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.352 [2024-07-15 23:51:49.392497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 00:25:14.353 [2024-07-15 23:51:49.392652] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.392688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 00:25:14.353 [2024-07-15 23:51:49.392844] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.392885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 00:25:14.353 [2024-07-15 23:51:49.393066] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.393102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 00:25:14.353 [2024-07-15 23:51:49.393240] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.393274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 00:25:14.353 [2024-07-15 23:51:49.393443] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.393488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 00:25:14.353 [2024-07-15 23:51:49.393632] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.393667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 00:25:14.353 [2024-07-15 23:51:49.393852] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.393888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 00:25:14.353 [2024-07-15 23:51:49.394021] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.394056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 
00:25:14.353 [2024-07-15 23:51:49.394212] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.394247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 00:25:14.353 [2024-07-15 23:51:49.394380] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.394425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 00:25:14.353 [2024-07-15 23:51:49.394563] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.394605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 00:25:14.353 [2024-07-15 23:51:49.394737] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.394771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 00:25:14.353 [2024-07-15 23:51:49.394919] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.394972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 00:25:14.353 [2024-07-15 23:51:49.395109] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.395151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 00:25:14.353 [2024-07-15 23:51:49.395307] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.395343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 00:25:14.353 [2024-07-15 23:51:49.395483] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.395518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 00:25:14.353 [2024-07-15 23:51:49.395667] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.395701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 00:25:14.353 [2024-07-15 23:51:49.395854] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.395898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 
00:25:14.353 [2024-07-15 23:51:49.396050] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.396086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 00:25:14.353 [2024-07-15 23:51:49.396237] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.396272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 00:25:14.353 [2024-07-15 23:51:49.396393] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.396427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 00:25:14.353 [2024-07-15 23:51:49.396559] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.396594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 00:25:14.353 [2024-07-15 23:51:49.396768] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.396804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 00:25:14.353 [2024-07-15 23:51:49.396989] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.397026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 00:25:14.353 [2024-07-15 23:51:49.397180] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.397215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 00:25:14.353 [2024-07-15 23:51:49.397336] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.397370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 00:25:14.353 [2024-07-15 23:51:49.397521] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.397556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 00:25:14.353 [2024-07-15 23:51:49.397737] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.397772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 
00:25:14.353 [2024-07-15 23:51:49.397899] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.397934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 00:25:14.353 [2024-07-15 23:51:49.398100] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.398135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 00:25:14.353 [2024-07-15 23:51:49.398265] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.398303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 00:25:14.353 [2024-07-15 23:51:49.398457] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.398491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 00:25:14.353 [2024-07-15 23:51:49.398663] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.398698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 00:25:14.353 [2024-07-15 23:51:49.398830] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.398865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 00:25:14.353 [2024-07-15 23:51:49.399015] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.399051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 00:25:14.353 [2024-07-15 23:51:49.399201] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.399244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 00:25:14.353 [2024-07-15 23:51:49.399389] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.399424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 00:25:14.353 [2024-07-15 23:51:49.399551] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.399583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 
00:25:14.353 [2024-07-15 23:51:49.399731] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.399763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 00:25:14.353 [2024-07-15 23:51:49.399963] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.399997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 00:25:14.353 [2024-07-15 23:51:49.400143] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.400178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 00:25:14.353 [2024-07-15 23:51:49.400293] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.400325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 00:25:14.353 [2024-07-15 23:51:49.400502] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.400555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 00:25:14.353 [2024-07-15 23:51:49.400726] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.400766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 00:25:14.353 [2024-07-15 23:51:49.400947] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.400992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 00:25:14.353 [2024-07-15 23:51:49.401119] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.401151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 00:25:14.353 [2024-07-15 23:51:49.401277] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.353 [2024-07-15 23:51:49.401312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.353 qpair failed and we were unable to recover it. 00:25:14.354 [2024-07-15 23:51:49.401441] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.354 [2024-07-15 23:51:49.401479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.354 qpair failed and we were unable to recover it. 
00:25:14.354 [2024-07-15 23:51:49.401693] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.354 [2024-07-15 23:51:49.401740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.354 qpair failed and we were unable to recover it. 00:25:14.354 [2024-07-15 23:51:49.401948] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.354 [2024-07-15 23:51:49.402002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.354 qpair failed and we were unable to recover it. 00:25:14.354 [2024-07-15 23:51:49.402146] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.354 [2024-07-15 23:51:49.402181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.354 qpair failed and we were unable to recover it. 00:25:14.354 [2024-07-15 23:51:49.402393] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.354 [2024-07-15 23:51:49.402435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.354 qpair failed and we were unable to recover it. 00:25:14.354 [2024-07-15 23:51:49.402578] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.354 [2024-07-15 23:51:49.402619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.354 qpair failed and we were unable to recover it. 00:25:14.354 [2024-07-15 23:51:49.402764] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.354 [2024-07-15 23:51:49.402825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.354 qpair failed and we were unable to recover it. 00:25:14.354 [2024-07-15 23:51:49.403003] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.354 [2024-07-15 23:51:49.403041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.354 qpair failed and we were unable to recover it. 00:25:14.354 [2024-07-15 23:51:49.403164] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.354 [2024-07-15 23:51:49.403199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.354 qpair failed and we were unable to recover it. 00:25:14.354 [2024-07-15 23:51:49.403360] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.354 [2024-07-15 23:51:49.403395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.354 qpair failed and we were unable to recover it. 00:25:14.354 [2024-07-15 23:51:49.403581] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.354 [2024-07-15 23:51:49.403623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.354 qpair failed and we were unable to recover it. 
00:25:14.354 [2024-07-15 23:51:49.403764] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.354 [2024-07-15 23:51:49.403808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.354 qpair failed and we were unable to recover it. 00:25:14.354 [2024-07-15 23:51:49.403975] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.354 [2024-07-15 23:51:49.404029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.354 qpair failed and we were unable to recover it. 00:25:14.354 [2024-07-15 23:51:49.404181] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.354 [2024-07-15 23:51:49.404215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.354 qpair failed and we were unable to recover it. 00:25:14.354 [2024-07-15 23:51:49.404459] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.354 [2024-07-15 23:51:49.404514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.354 qpair failed and we were unable to recover it. 00:25:14.354 [2024-07-15 23:51:49.404719] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.354 [2024-07-15 23:51:49.404779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.354 qpair failed and we were unable to recover it. 00:25:14.354 [2024-07-15 23:51:49.404984] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.354 [2024-07-15 23:51:49.405038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.354 qpair failed and we were unable to recover it. 00:25:14.354 [2024-07-15 23:51:49.405204] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.354 [2024-07-15 23:51:49.405239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.354 qpair failed and we were unable to recover it. 00:25:14.354 [2024-07-15 23:51:49.405380] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.354 [2024-07-15 23:51:49.405442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.354 qpair failed and we were unable to recover it. 00:25:14.354 [2024-07-15 23:51:49.405665] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.354 [2024-07-15 23:51:49.405723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.354 qpair failed and we were unable to recover it. 00:25:14.354 [2024-07-15 23:51:49.405885] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.354 [2024-07-15 23:51:49.405920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.354 qpair failed and we were unable to recover it. 
00:25:14.354 [2024-07-15 23:51:49.406076] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.354 [2024-07-15 23:51:49.406112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.354 qpair failed and we were unable to recover it. 00:25:14.354 [2024-07-15 23:51:49.406247] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.354 [2024-07-15 23:51:49.406286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.354 qpair failed and we were unable to recover it. 00:25:14.354 [2024-07-15 23:51:49.406402] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.354 [2024-07-15 23:51:49.406437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.354 qpair failed and we were unable to recover it. 00:25:14.354 [2024-07-15 23:51:49.406682] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.354 [2024-07-15 23:51:49.406715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.354 qpair failed and we were unable to recover it. 00:25:14.354 [2024-07-15 23:51:49.406953] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.354 [2024-07-15 23:51:49.407001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.354 qpair failed and we were unable to recover it. 00:25:14.354 [2024-07-15 23:51:49.407122] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.354 [2024-07-15 23:51:49.407158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.354 qpair failed and we were unable to recover it. 00:25:14.354 [2024-07-15 23:51:49.407358] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.354 [2024-07-15 23:51:49.407398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.354 qpair failed and we were unable to recover it. 00:25:14.354 [2024-07-15 23:51:49.407574] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.354 [2024-07-15 23:51:49.407631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.354 qpair failed and we were unable to recover it. 00:25:14.354 [2024-07-15 23:51:49.407810] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.354 [2024-07-15 23:51:49.407854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.354 qpair failed and we were unable to recover it. 00:25:14.354 [2024-07-15 23:51:49.408043] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.354 [2024-07-15 23:51:49.408078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.354 qpair failed and we were unable to recover it. 
00:25:14.354 [2024-07-15 23:51:49.408258] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.354 [2024-07-15 23:51:49.408292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.354 qpair failed and we were unable to recover it. 00:25:14.354 [2024-07-15 23:51:49.408543] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.354 [2024-07-15 23:51:49.408577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.354 qpair failed and we were unable to recover it. 00:25:14.354 [2024-07-15 23:51:49.408777] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.354 [2024-07-15 23:51:49.408818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.354 qpair failed and we were unable to recover it. 00:25:14.354 [2024-07-15 23:51:49.409036] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.354 [2024-07-15 23:51:49.409071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.354 qpair failed and we were unable to recover it. 00:25:14.354 [2024-07-15 23:51:49.409230] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.354 [2024-07-15 23:51:49.409267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.354 qpair failed and we were unable to recover it. 00:25:14.354 [2024-07-15 23:51:49.409513] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.354 [2024-07-15 23:51:49.409594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.354 qpair failed and we were unable to recover it. 00:25:14.354 [2024-07-15 23:51:49.409774] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.355 [2024-07-15 23:51:49.409816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.355 qpair failed and we were unable to recover it. 00:25:14.355 [2024-07-15 23:51:49.409974] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.355 [2024-07-15 23:51:49.410028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.355 qpair failed and we were unable to recover it. 00:25:14.355 [2024-07-15 23:51:49.410164] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.355 [2024-07-15 23:51:49.410200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.355 qpair failed and we were unable to recover it. 00:25:14.355 [2024-07-15 23:51:49.410364] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.355 [2024-07-15 23:51:49.410398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.355 qpair failed and we were unable to recover it. 
00:25:14.355 [2024-07-15 23:51:49.410558] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.355 [2024-07-15 23:51:49.410613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.355 qpair failed and we were unable to recover it. 00:25:14.355 [2024-07-15 23:51:49.410815] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.355 [2024-07-15 23:51:49.410863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.355 qpair failed and we were unable to recover it. 00:25:14.355 [2024-07-15 23:51:49.411068] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.355 [2024-07-15 23:51:49.411104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.355 qpair failed and we were unable to recover it. 00:25:14.355 [2024-07-15 23:51:49.411243] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.355 [2024-07-15 23:51:49.411277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.355 qpair failed and we were unable to recover it. 00:25:14.355 [2024-07-15 23:51:49.411501] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.355 [2024-07-15 23:51:49.411577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.355 qpair failed and we were unable to recover it. 00:25:14.355 [2024-07-15 23:51:49.411817] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.355 [2024-07-15 23:51:49.411850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.355 qpair failed and we were unable to recover it. 00:25:14.355 [2024-07-15 23:51:49.411990] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.355 [2024-07-15 23:51:49.412025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.355 qpair failed and we were unable to recover it. 00:25:14.355 [2024-07-15 23:51:49.412183] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.355 [2024-07-15 23:51:49.412219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.355 qpair failed and we were unable to recover it. 00:25:14.355 [2024-07-15 23:51:49.412408] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.355 [2024-07-15 23:51:49.412443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.355 qpair failed and we were unable to recover it. 00:25:14.355 [2024-07-15 23:51:49.412607] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.355 [2024-07-15 23:51:49.412661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.355 qpair failed and we were unable to recover it. 
00:25:14.355 [2024-07-15 23:51:49.412893] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.355 [2024-07-15 23:51:49.412927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.355 qpair failed and we were unable to recover it. 00:25:14.355 [2024-07-15 23:51:49.413082] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.355 [2024-07-15 23:51:49.413127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.355 qpair failed and we were unable to recover it. 00:25:14.355 [2024-07-15 23:51:49.413295] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.355 [2024-07-15 23:51:49.413371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.355 qpair failed and we were unable to recover it. 00:25:14.355 [2024-07-15 23:51:49.413674] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.355 [2024-07-15 23:51:49.413740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.355 qpair failed and we were unable to recover it. 00:25:14.355 [2024-07-15 23:51:49.413897] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.355 [2024-07-15 23:51:49.413933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.355 qpair failed and we were unable to recover it. 00:25:14.355 [2024-07-15 23:51:49.414086] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.355 [2024-07-15 23:51:49.414127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.355 qpair failed and we were unable to recover it. 00:25:14.355 [2024-07-15 23:51:49.414264] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.355 [2024-07-15 23:51:49.414300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.355 qpair failed and we were unable to recover it. 00:25:14.355 [2024-07-15 23:51:49.414556] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.355 [2024-07-15 23:51:49.414591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.355 qpair failed and we were unable to recover it. 00:25:14.355 [2024-07-15 23:51:49.414872] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.355 [2024-07-15 23:51:49.414930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.355 qpair failed and we were unable to recover it. 00:25:14.355 [2024-07-15 23:51:49.415151] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.355 [2024-07-15 23:51:49.415186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.355 qpair failed and we were unable to recover it. 
00:25:14.355 [2024-07-15 23:51:49.415402] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.355 [2024-07-15 23:51:49.415477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.355 qpair failed and we were unable to recover it. 00:25:14.355 [2024-07-15 23:51:49.415770] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.355 [2024-07-15 23:51:49.415829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.355 qpair failed and we were unable to recover it. 00:25:14.355 [2024-07-15 23:51:49.416021] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.355 [2024-07-15 23:51:49.416055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.355 qpair failed and we were unable to recover it. 00:25:14.355 [2024-07-15 23:51:49.416205] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.355 [2024-07-15 23:51:49.416241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.355 qpair failed and we were unable to recover it. 00:25:14.355 [2024-07-15 23:51:49.416565] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.355 [2024-07-15 23:51:49.416660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.355 qpair failed and we were unable to recover it. 00:25:14.355 [2024-07-15 23:51:49.416928] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.355 [2024-07-15 23:51:49.417025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.355 qpair failed and we were unable to recover it. 00:25:14.355 [2024-07-15 23:51:49.417200] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.355 [2024-07-15 23:51:49.417235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.355 qpair failed and we were unable to recover it. 00:25:14.355 [2024-07-15 23:51:49.417415] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.355 [2024-07-15 23:51:49.417489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.355 qpair failed and we were unable to recover it. 00:25:14.355 [2024-07-15 23:51:49.417671] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.355 [2024-07-15 23:51:49.417760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.355 qpair failed and we were unable to recover it. 00:25:14.355 [2024-07-15 23:51:49.418034] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.355 [2024-07-15 23:51:49.418069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.355 qpair failed and we were unable to recover it. 
00:25:14.355 [2024-07-15 23:51:49.418196] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.355 [2024-07-15 23:51:49.418231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.355 qpair failed and we were unable to recover it.
00:25:14.356 (the preceding three-line connect()/qpair error sequence repeats, identical except for timestamps, for every reconnect attempt from 23:51:49.418358 through 23:51:49.436745)
00:25:14.357 [2024-07-15 23:51:49.436942] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.357 [2024-07-15 23:51:49.437031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.357 qpair failed and we were unable to recover it.
00:25:14.357 (sequence repeats through 23:51:49.438073)
00:25:14.357 [2024-07-15 23:51:49.438137] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization...
00:25:14.357 [2024-07-15 23:51:49.438215] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:14.357 (connect()/qpair error sequence resumes at 23:51:49.438266 and continues through 23:51:49.438910)
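The two initialization lines above, interleaved with the connection errors, mark the nvmf target process starting up: SPDK v24.09-pre at git sha1 1053f1b13 on DPDK 24.03.0, pinned to cores 4-7 (-c 0xF0), with DPDK telemetry disabled and hugepage mappings placed at a fixed base virtual address. A hedged sketch of how an EAL parameter line like this is typically produced through SPDK's environment API follows; the real nvmf target goes through the app framework rather than calling spdk_env_init() directly, and the remaining flags (--no-telemetry, --match-allocations, --file-prefix) are filled in by the env_dpdk layer itself. Field names follow spdk/env.h as I recall it around that sha; verify against the actual tree.

#include <stdio.h>
#include "spdk/env.h"

int main(void)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);
    opts.name = "nvmf";                       /* first token of the EAL argv */
    opts.core_mask = "0xF0";                  /* -c 0xF0: cores 4-7          */
    opts.base_virt_addr = 0x200000000000ULL;  /* --base-virtaddr             */

    if (spdk_env_init(&opts) < 0) {
        fprintf(stderr, "SPDK env init failed\n");
        return 1;
    }
    /* ...start transports, subsystems, and listeners here... */
    spdk_env_fini();
    return 0;
}

The connect() errors on either side of these lines are consistent with the initiator retrying while the target is still initializing and has not yet opened its listener on 10.0.0.2:4420.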
00:25:14.357 [2024-07-15 23:51:49.439090] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.357 [2024-07-15 23:51:49.439132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.357 qpair failed and we were unable to recover it.
00:25:14.643 (the same sequence repeats for every subsequent reconnect attempt through 23:51:49.465744; every attempt ends with "qpair failed and we were unable to recover it.")
00:25:14.643 [2024-07-15 23:51:49.465988] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.643 [2024-07-15 23:51:49.466023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.643 qpair failed and we were unable to recover it. 00:25:14.643 [2024-07-15 23:51:49.466148] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.643 [2024-07-15 23:51:49.466182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.643 qpair failed and we were unable to recover it. 00:25:14.643 [2024-07-15 23:51:49.466337] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.643 [2024-07-15 23:51:49.466373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.643 qpair failed and we were unable to recover it. 00:25:14.643 [2024-07-15 23:51:49.466530] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.643 [2024-07-15 23:51:49.466564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.643 qpair failed and we were unable to recover it. 00:25:14.643 [2024-07-15 23:51:49.466699] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.643 [2024-07-15 23:51:49.466734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.643 qpair failed and we were unable to recover it. 00:25:14.643 [2024-07-15 23:51:49.466880] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.643 [2024-07-15 23:51:49.466913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.643 qpair failed and we were unable to recover it. 00:25:14.643 [2024-07-15 23:51:49.467065] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.643 [2024-07-15 23:51:49.467114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.643 qpair failed and we were unable to recover it. 00:25:14.643 [2024-07-15 23:51:49.467261] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.643 [2024-07-15 23:51:49.467295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.643 qpair failed and we were unable to recover it. 00:25:14.643 [2024-07-15 23:51:49.467437] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.643 [2024-07-15 23:51:49.467470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.643 qpair failed and we were unable to recover it. 00:25:14.643 [2024-07-15 23:51:49.467653] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.643 [2024-07-15 23:51:49.467713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.643 qpair failed and we were unable to recover it. 
00:25:14.643 [2024-07-15 23:51:49.467934] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.643 [2024-07-15 23:51:49.467983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.643 qpair failed and we were unable to recover it. 00:25:14.643 [2024-07-15 23:51:49.468132] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.643 [2024-07-15 23:51:49.468177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.643 qpair failed and we were unable to recover it. 00:25:14.643 [2024-07-15 23:51:49.468391] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.643 [2024-07-15 23:51:49.468447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.643 qpair failed and we were unable to recover it. 00:25:14.643 [2024-07-15 23:51:49.468695] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.643 [2024-07-15 23:51:49.468730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.643 qpair failed and we were unable to recover it. 00:25:14.643 [2024-07-15 23:51:49.468852] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.643 [2024-07-15 23:51:49.468886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.643 qpair failed and we were unable to recover it. 00:25:14.643 [2024-07-15 23:51:49.469045] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.643 [2024-07-15 23:51:49.469081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.643 qpair failed and we were unable to recover it. 00:25:14.643 [2024-07-15 23:51:49.469238] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.643 [2024-07-15 23:51:49.469272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.643 qpair failed and we were unable to recover it. 00:25:14.643 [2024-07-15 23:51:49.469467] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.643 [2024-07-15 23:51:49.469501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.643 qpair failed and we were unable to recover it. 00:25:14.643 [2024-07-15 23:51:49.469684] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.643 [2024-07-15 23:51:49.469746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.643 qpair failed and we were unable to recover it. 00:25:14.643 [2024-07-15 23:51:49.469976] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.643 [2024-07-15 23:51:49.470011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.643 qpair failed and we were unable to recover it. 
00:25:14.643 [2024-07-15 23:51:49.470173] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.643 [2024-07-15 23:51:49.470207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.643 qpair failed and we were unable to recover it. 00:25:14.643 [2024-07-15 23:51:49.470450] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.643 [2024-07-15 23:51:49.470498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.643 qpair failed and we were unable to recover it. 00:25:14.643 [2024-07-15 23:51:49.470670] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.643 [2024-07-15 23:51:49.470721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.643 qpair failed and we were unable to recover it. 00:25:14.643 [2024-07-15 23:51:49.470931] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.643 [2024-07-15 23:51:49.470974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.643 qpair failed and we were unable to recover it. 00:25:14.643 [2024-07-15 23:51:49.471110] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.643 [2024-07-15 23:51:49.471144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.643 qpair failed and we were unable to recover it. 00:25:14.643 [2024-07-15 23:51:49.471335] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.644 [2024-07-15 23:51:49.471371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.644 qpair failed and we were unable to recover it. 00:25:14.644 [2024-07-15 23:51:49.471490] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.644 [2024-07-15 23:51:49.471525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.644 qpair failed and we were unable to recover it. 00:25:14.644 [2024-07-15 23:51:49.471673] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.644 [2024-07-15 23:51:49.471709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.644 qpair failed and we were unable to recover it. 00:25:14.644 [2024-07-15 23:51:49.471872] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.644 [2024-07-15 23:51:49.471908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.644 qpair failed and we were unable to recover it. 00:25:14.644 [2024-07-15 23:51:49.472053] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.644 [2024-07-15 23:51:49.472088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.644 qpair failed and we were unable to recover it. 
00:25:14.644 [2024-07-15 23:51:49.472239] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.644 [2024-07-15 23:51:49.472274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.644 qpair failed and we were unable to recover it. 00:25:14.644 [2024-07-15 23:51:49.472410] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.644 [2024-07-15 23:51:49.472444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.644 qpair failed and we were unable to recover it. 00:25:14.644 [2024-07-15 23:51:49.472600] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.644 [2024-07-15 23:51:49.472634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.644 qpair failed and we were unable to recover it. 00:25:14.644 [2024-07-15 23:51:49.472769] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.644 [2024-07-15 23:51:49.472804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.644 qpair failed and we were unable to recover it. 00:25:14.644 [2024-07-15 23:51:49.472989] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.644 [2024-07-15 23:51:49.473025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.644 qpair failed and we were unable to recover it. 00:25:14.644 [2024-07-15 23:51:49.473182] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.644 [2024-07-15 23:51:49.473215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.644 qpair failed and we were unable to recover it. 00:25:14.644 [2024-07-15 23:51:49.473392] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.644 [2024-07-15 23:51:49.473428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.644 qpair failed and we were unable to recover it. 00:25:14.644 [2024-07-15 23:51:49.473593] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.644 [2024-07-15 23:51:49.473628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.644 qpair failed and we were unable to recover it. 00:25:14.644 [2024-07-15 23:51:49.473760] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.644 [2024-07-15 23:51:49.473795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.644 qpair failed and we were unable to recover it. 00:25:14.644 [2024-07-15 23:51:49.474000] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.644 [2024-07-15 23:51:49.474035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.644 qpair failed and we were unable to recover it. 
00:25:14.644 [2024-07-15 23:51:49.474170] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.644 [2024-07-15 23:51:49.474205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.644 qpair failed and we were unable to recover it. 00:25:14.644 [2024-07-15 23:51:49.474347] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.644 [2024-07-15 23:51:49.474388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.644 qpair failed and we were unable to recover it. 00:25:14.644 [2024-07-15 23:51:49.474551] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.644 [2024-07-15 23:51:49.474585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.644 qpair failed and we were unable to recover it. 00:25:14.644 [2024-07-15 23:51:49.474745] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.644 [2024-07-15 23:51:49.474780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.644 qpair failed and we were unable to recover it. 00:25:14.644 [2024-07-15 23:51:49.474985] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.644 [2024-07-15 23:51:49.475020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.644 qpair failed and we were unable to recover it. 00:25:14.644 [2024-07-15 23:51:49.475158] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.644 [2024-07-15 23:51:49.475192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.644 qpair failed and we were unable to recover it. 00:25:14.644 [2024-07-15 23:51:49.475359] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.644 [2024-07-15 23:51:49.475394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.644 qpair failed and we were unable to recover it. 00:25:14.644 [2024-07-15 23:51:49.475524] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.644 [2024-07-15 23:51:49.475558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.644 qpair failed and we were unable to recover it. 00:25:14.644 EAL: No free 2048 kB hugepages reported on node 1 00:25:14.644 [2024-07-15 23:51:49.475679] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.644 [2024-07-15 23:51:49.475714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.644 qpair failed and we were unable to recover it. 00:25:14.644 [2024-07-15 23:51:49.476813] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.644 [2024-07-15 23:51:49.476882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.644 qpair failed and we were unable to recover it. 
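Editorial note, not part of the captured log: errno = 111 on Linux is ECONNREFUSED, meaning the TCP connect() that posix_sock_create issues toward the NVMe/TCP target at 10.0.0.2:4420 (4420 is the IANA-assigned NVMe/TCP port) was actively refused, typically because nothing was accepting on that address/port at the time. The minimal, self-contained C sketch below reproduces the same errno from a plain POSIX socket; the address and port mirror the log, and the program is illustrative only, not SPDK code.

/* Illustrative only -- reproduces errno = 111 (ECONNREFUSED) when no
 * listener is accepting on the target address/port, which is what the
 * connect() inside posix_sock_create is reporting above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa = {0};
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                    /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr); /* target from the log */

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* With no listener (the peer answers with RST) this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}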
00:25:14.644-00:25:14.645 [2024-07-15 23:51:49.477108 through 23:51:49.482253] (the same error triplet for tqpair=0x7a7200 with addr=10.0.0.2, port=4420 continues to repeat verbatim throughout this interval)
00:25:14.645 [2024-07-15 23:51:49.482377 through 23:51:49.482691] (three more occurrences of the triplet for tqpair=0x7a7200)
00:25:14.645 [2024-07-15 23:51:49.482847] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.645 [2024-07-15 23:51:49.482887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.645 qpair failed and we were unable to recover it.
00:25:14.645-00:25:14.647 [2024-07-15 23:51:49.483018 through 23:51:49.494887] (the triplet keeps repeating in runs that alternate between tqpair=0x7feb84000b90 and tqpair=0x7a7200, all against addr=10.0.0.2, port=4420; a single entry at 23:51:49.491417 reports tqpair=0x7feb8c000b90; the last entry in this burst, at 23:51:49.494858, is for tqpair=0x7feb84000b90)
00:25:14.647 [2024-07-15 23:51:49.495020] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.647 [2024-07-15 23:51:49.495048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.647 qpair failed and we were unable to recover it. 00:25:14.647 [2024-07-15 23:51:49.495147] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.647 [2024-07-15 23:51:49.495174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.647 qpair failed and we were unable to recover it. 00:25:14.647 [2024-07-15 23:51:49.495273] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.647 [2024-07-15 23:51:49.495300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.647 qpair failed and we were unable to recover it. 00:25:14.647 [2024-07-15 23:51:49.495426] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.647 [2024-07-15 23:51:49.495454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.647 qpair failed and we were unable to recover it. 00:25:14.647 [2024-07-15 23:51:49.495551] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.647 [2024-07-15 23:51:49.495578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.647 qpair failed and we were unable to recover it. 00:25:14.647 [2024-07-15 23:51:49.495707] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.647 [2024-07-15 23:51:49.495734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.647 qpair failed and we were unable to recover it. 00:25:14.647 [2024-07-15 23:51:49.495841] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.647 [2024-07-15 23:51:49.495881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.647 qpair failed and we were unable to recover it. 00:25:14.647 [2024-07-15 23:51:49.495994] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.647 [2024-07-15 23:51:49.496023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.647 qpair failed and we were unable to recover it. 00:25:14.647 [2024-07-15 23:51:49.496117] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.647 [2024-07-15 23:51:49.496144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.647 qpair failed and we were unable to recover it. 00:25:14.647 [2024-07-15 23:51:49.496261] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.647 [2024-07-15 23:51:49.496288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.647 qpair failed and we were unable to recover it. 
00:25:14.647 [2024-07-15 23:51:49.496383] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.647 [2024-07-15 23:51:49.496411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.647 qpair failed and we were unable to recover it. 00:25:14.647 [2024-07-15 23:51:49.496535] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.647 [2024-07-15 23:51:49.496562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.647 qpair failed and we were unable to recover it. 00:25:14.647 [2024-07-15 23:51:49.496667] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.647 [2024-07-15 23:51:49.496695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.647 qpair failed and we were unable to recover it. 00:25:14.647 [2024-07-15 23:51:49.496796] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.647 [2024-07-15 23:51:49.496824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.647 qpair failed and we were unable to recover it. 00:25:14.647 [2024-07-15 23:51:49.496975] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.647 [2024-07-15 23:51:49.497015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.647 qpair failed and we were unable to recover it. 00:25:14.647 [2024-07-15 23:51:49.497136] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.647 [2024-07-15 23:51:49.497164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.647 qpair failed and we were unable to recover it. 00:25:14.647 [2024-07-15 23:51:49.497274] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.647 [2024-07-15 23:51:49.497300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.647 qpair failed and we were unable to recover it. 00:25:14.647 [2024-07-15 23:51:49.497421] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.647 [2024-07-15 23:51:49.497447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.647 qpair failed and we were unable to recover it. 00:25:14.647 [2024-07-15 23:51:49.497550] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.647 [2024-07-15 23:51:49.497576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.647 qpair failed and we were unable to recover it. 00:25:14.647 [2024-07-15 23:51:49.497677] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.647 [2024-07-15 23:51:49.497703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.647 qpair failed and we were unable to recover it. 
00:25:14.647 [2024-07-15 23:51:49.497811] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.647 [2024-07-15 23:51:49.497851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.647 qpair failed and we were unable to recover it. 00:25:14.647 [2024-07-15 23:51:49.497976] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.647 [2024-07-15 23:51:49.498006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.647 qpair failed and we were unable to recover it. 00:25:14.647 [2024-07-15 23:51:49.498108] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.647 [2024-07-15 23:51:49.498134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.647 qpair failed and we were unable to recover it. 00:25:14.647 [2024-07-15 23:51:49.498227] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.647 [2024-07-15 23:51:49.498255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.647 qpair failed and we were unable to recover it. 00:25:14.647 [2024-07-15 23:51:49.498381] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.647 [2024-07-15 23:51:49.498408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.647 qpair failed and we were unable to recover it. 00:25:14.647 [2024-07-15 23:51:49.498508] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.647 [2024-07-15 23:51:49.498542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.647 qpair failed and we were unable to recover it. 00:25:14.647 [2024-07-15 23:51:49.498639] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.647 [2024-07-15 23:51:49.498666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.647 qpair failed and we were unable to recover it. 00:25:14.647 [2024-07-15 23:51:49.498797] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.648 [2024-07-15 23:51:49.498828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.648 qpair failed and we were unable to recover it. 00:25:14.648 [2024-07-15 23:51:49.498925] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.648 [2024-07-15 23:51:49.498967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.648 qpair failed and we were unable to recover it. 00:25:14.648 [2024-07-15 23:51:49.499073] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.648 [2024-07-15 23:51:49.499100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.648 qpair failed and we were unable to recover it. 
00:25:14.648 [2024-07-15 23:51:49.499198] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.648 [2024-07-15 23:51:49.499225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.648 qpair failed and we were unable to recover it. 00:25:14.648 [2024-07-15 23:51:49.499330] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.648 [2024-07-15 23:51:49.499357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.648 qpair failed and we were unable to recover it. 00:25:14.648 [2024-07-15 23:51:49.499500] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.648 [2024-07-15 23:51:49.499527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.648 qpair failed and we were unable to recover it. 00:25:14.648 [2024-07-15 23:51:49.499622] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.648 [2024-07-15 23:51:49.499649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.648 qpair failed and we were unable to recover it. 00:25:14.648 [2024-07-15 23:51:49.499790] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.648 [2024-07-15 23:51:49.499831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.648 qpair failed and we were unable to recover it. 00:25:14.648 [2024-07-15 23:51:49.499932] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.648 [2024-07-15 23:51:49.499976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.648 qpair failed and we were unable to recover it. 00:25:14.648 [2024-07-15 23:51:49.500082] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.648 [2024-07-15 23:51:49.500109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.648 qpair failed and we were unable to recover it. 00:25:14.648 [2024-07-15 23:51:49.500207] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.648 [2024-07-15 23:51:49.500234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.648 qpair failed and we were unable to recover it. 00:25:14.648 [2024-07-15 23:51:49.500337] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.648 [2024-07-15 23:51:49.500363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.648 qpair failed and we were unable to recover it. 00:25:14.648 [2024-07-15 23:51:49.500490] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.648 [2024-07-15 23:51:49.500518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.648 qpair failed and we were unable to recover it. 
00:25:14.648 [2024-07-15 23:51:49.500620] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.648 [2024-07-15 23:51:49.500646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.648 qpair failed and we were unable to recover it. 00:25:14.648 [2024-07-15 23:51:49.500747] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.648 [2024-07-15 23:51:49.500773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.648 qpair failed and we were unable to recover it. 00:25:14.648 [2024-07-15 23:51:49.500871] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.648 [2024-07-15 23:51:49.500901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.648 qpair failed and we were unable to recover it. 00:25:14.648 [2024-07-15 23:51:49.501015] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.648 [2024-07-15 23:51:49.501042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.648 qpair failed and we were unable to recover it. 00:25:14.648 [2024-07-15 23:51:49.501143] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.648 [2024-07-15 23:51:49.501169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.648 qpair failed and we were unable to recover it. 00:25:14.648 [2024-07-15 23:51:49.501271] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.648 [2024-07-15 23:51:49.501298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.648 qpair failed and we were unable to recover it. 00:25:14.648 [2024-07-15 23:51:49.501421] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.648 [2024-07-15 23:51:49.501448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.648 qpair failed and we were unable to recover it. 00:25:14.648 [2024-07-15 23:51:49.501549] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.648 [2024-07-15 23:51:49.501575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.648 qpair failed and we were unable to recover it. 00:25:14.648 [2024-07-15 23:51:49.501700] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.648 [2024-07-15 23:51:49.501726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.648 qpair failed and we were unable to recover it. 00:25:14.648 [2024-07-15 23:51:49.501851] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.648 [2024-07-15 23:51:49.501879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.648 qpair failed and we were unable to recover it. 
00:25:14.648 [2024-07-15 23:51:49.502021] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.648 [2024-07-15 23:51:49.502048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.648 qpair failed and we were unable to recover it. 00:25:14.648 [2024-07-15 23:51:49.502147] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.648 [2024-07-15 23:51:49.502173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.648 qpair failed and we were unable to recover it. 00:25:14.648 [2024-07-15 23:51:49.502313] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.648 [2024-07-15 23:51:49.502353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.648 qpair failed and we were unable to recover it. 00:25:14.648 [2024-07-15 23:51:49.502486] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.648 [2024-07-15 23:51:49.502515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.648 qpair failed and we were unable to recover it. 00:25:14.648 [2024-07-15 23:51:49.502645] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.648 [2024-07-15 23:51:49.502673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.648 qpair failed and we were unable to recover it. 00:25:14.648 [2024-07-15 23:51:49.502795] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.648 [2024-07-15 23:51:49.502823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.648 qpair failed and we were unable to recover it. 00:25:14.648 [2024-07-15 23:51:49.502921] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.648 [2024-07-15 23:51:49.502963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.648 qpair failed and we were unable to recover it. 00:25:14.648 [2024-07-15 23:51:49.503063] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.648 [2024-07-15 23:51:49.503089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.648 qpair failed and we were unable to recover it. 00:25:14.648 [2024-07-15 23:51:49.503192] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.648 [2024-07-15 23:51:49.503220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.648 qpair failed and we were unable to recover it. 00:25:14.648 [2024-07-15 23:51:49.503326] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.648 [2024-07-15 23:51:49.503352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.648 qpair failed and we were unable to recover it. 
00:25:14.648 [2024-07-15 23:51:49.503453] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.648 [2024-07-15 23:51:49.503480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.648 qpair failed and we were unable to recover it. 00:25:14.648 [2024-07-15 23:51:49.503609] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.648 [2024-07-15 23:51:49.503637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.648 qpair failed and we were unable to recover it. 00:25:14.648 [2024-07-15 23:51:49.503767] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.648 [2024-07-15 23:51:49.503794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.648 qpair failed and we were unable to recover it. 00:25:14.648 [2024-07-15 23:51:49.503912] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.648 [2024-07-15 23:51:49.503938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.648 qpair failed and we were unable to recover it. 00:25:14.648 [2024-07-15 23:51:49.504058] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.648 [2024-07-15 23:51:49.504085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.648 qpair failed and we were unable to recover it. 00:25:14.648 [2024-07-15 23:51:49.504204] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.648 [2024-07-15 23:51:49.504231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.648 qpair failed and we were unable to recover it. 00:25:14.648 [2024-07-15 23:51:49.504338] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.648 [2024-07-15 23:51:49.504365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.648 qpair failed and we were unable to recover it. 00:25:14.648 [2024-07-15 23:51:49.504490] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.648 [2024-07-15 23:51:49.504517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.648 qpair failed and we were unable to recover it. 00:25:14.648 [2024-07-15 23:51:49.504601] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.504627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 00:25:14.649 [2024-07-15 23:51:49.504750] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.504777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 
00:25:14.649 [2024-07-15 23:51:49.504909] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.504935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 00:25:14.649 [2024-07-15 23:51:49.505085] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.505112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 00:25:14.649 [2024-07-15 23:51:49.505241] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.505268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 00:25:14.649 [2024-07-15 23:51:49.505363] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.505389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 00:25:14.649 [2024-07-15 23:51:49.505490] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.505516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 00:25:14.649 [2024-07-15 23:51:49.505605] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.505631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 00:25:14.649 [2024-07-15 23:51:49.505776] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.505802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 00:25:14.649 [2024-07-15 23:51:49.505922] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.505966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 00:25:14.649 [2024-07-15 23:51:49.506064] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.506091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 00:25:14.649 [2024-07-15 23:51:49.506186] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.506216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 
00:25:14.649 [2024-07-15 23:51:49.506324] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.506351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 00:25:14.649 [2024-07-15 23:51:49.506439] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.506465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 00:25:14.649 [2024-07-15 23:51:49.506564] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.506591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 00:25:14.649 [2024-07-15 23:51:49.506678] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.506705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 00:25:14.649 [2024-07-15 23:51:49.506821] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.506847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 00:25:14.649 [2024-07-15 23:51:49.506979] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.507019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 00:25:14.649 [2024-07-15 23:51:49.507131] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.507171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 00:25:14.649 [2024-07-15 23:51:49.507293] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.507322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 00:25:14.649 [2024-07-15 23:51:49.507448] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.507476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 00:25:14.649 [2024-07-15 23:51:49.507595] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.507622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 
00:25:14.649 [2024-07-15 23:51:49.507722] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.507749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 00:25:14.649 [2024-07-15 23:51:49.507869] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.507896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 00:25:14.649 [2024-07-15 23:51:49.508012] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.508039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 00:25:14.649 [2024-07-15 23:51:49.508148] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.508175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 00:25:14.649 [2024-07-15 23:51:49.508303] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.508330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 00:25:14.649 [2024-07-15 23:51:49.508422] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.508449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 00:25:14.649 [2024-07-15 23:51:49.508594] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.508621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 00:25:14.649 [2024-07-15 23:51:49.508749] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.508776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 00:25:14.649 [2024-07-15 23:51:49.508876] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.508904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 00:25:14.649 [2024-07-15 23:51:49.509007] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.509035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 
00:25:14.649 [2024-07-15 23:51:49.509131] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.509158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 00:25:14.649 [2024-07-15 23:51:49.509292] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.509319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 00:25:14.649 [2024-07-15 23:51:49.509423] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.509465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 00:25:14.649 [2024-07-15 23:51:49.509574] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.509600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 00:25:14.649 [2024-07-15 23:51:49.509755] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.509782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 00:25:14.649 [2024-07-15 23:51:49.509906] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.509932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 00:25:14.649 [2024-07-15 23:51:49.510051] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.510078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 00:25:14.649 [2024-07-15 23:51:49.510169] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.510196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 00:25:14.649 [2024-07-15 23:51:49.510319] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.510347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 00:25:14.649 [2024-07-15 23:51:49.510436] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.510463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 
00:25:14.649 [2024-07-15 23:51:49.510610] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.510637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 00:25:14.649 [2024-07-15 23:51:49.510733] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.510760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 00:25:14.649 [2024-07-15 23:51:49.510879] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.510918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 00:25:14.649 [2024-07-15 23:51:49.511032] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.511072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 00:25:14.649 [2024-07-15 23:51:49.511208] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.511236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 00:25:14.649 [2024-07-15 23:51:49.511368] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.649 [2024-07-15 23:51:49.511395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.649 qpair failed and we were unable to recover it. 00:25:14.650 [2024-07-15 23:51:49.511510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:14.650 [2024-07-15 23:51:49.511516] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.650 [2024-07-15 23:51:49.511544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.650 qpair failed and we were unable to recover it. 00:25:14.650 [2024-07-15 23:51:49.511668] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.650 [2024-07-15 23:51:49.511694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.650 qpair failed and we were unable to recover it. 00:25:14.650 [2024-07-15 23:51:49.511814] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.650 [2024-07-15 23:51:49.511843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.650 qpair failed and we were unable to recover it. 00:25:14.650 [2024-07-15 23:51:49.511952] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.650 [2024-07-15 23:51:49.512000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.650 qpair failed and we were unable to recover it. 
00:25:14.650 [2024-07-15 23:51:49.512125] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.650 [2024-07-15 23:51:49.512165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.650 qpair failed and we were unable to recover it. 00:25:14.650 [2024-07-15 23:51:49.512315] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.650 [2024-07-15 23:51:49.512344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.650 qpair failed and we were unable to recover it. 00:25:14.650 [2024-07-15 23:51:49.512469] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.650 [2024-07-15 23:51:49.512496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.650 qpair failed and we were unable to recover it. 00:25:14.650 [2024-07-15 23:51:49.512646] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.650 [2024-07-15 23:51:49.512673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.650 qpair failed and we were unable to recover it. 00:25:14.650 [2024-07-15 23:51:49.512763] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.650 [2024-07-15 23:51:49.512789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.650 qpair failed and we were unable to recover it. 00:25:14.650 [2024-07-15 23:51:49.512883] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.650 [2024-07-15 23:51:49.512911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.650 qpair failed and we were unable to recover it. 00:25:14.650 [2024-07-15 23:51:49.513052] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.650 [2024-07-15 23:51:49.513079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.650 qpair failed and we were unable to recover it. 00:25:14.650 [2024-07-15 23:51:49.513198] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.650 [2024-07-15 23:51:49.513225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.650 qpair failed and we were unable to recover it. 00:25:14.650 [2024-07-15 23:51:49.513352] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.650 [2024-07-15 23:51:49.513379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.650 qpair failed and we were unable to recover it. 00:25:14.650 [2024-07-15 23:51:49.513499] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.650 [2024-07-15 23:51:49.513526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.650 qpair failed and we were unable to recover it. 
00:25:14.650 [2024-07-15 23:51:49.513625] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.650 [2024-07-15 23:51:49.513652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.650 qpair failed and we were unable to recover it. 00:25:14.650 [2024-07-15 23:51:49.513802] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.650 [2024-07-15 23:51:49.513829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.650 qpair failed and we were unable to recover it. 00:25:14.650 [2024-07-15 23:51:49.513943] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.650 [2024-07-15 23:51:49.513995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.650 qpair failed and we were unable to recover it. 00:25:14.650 [2024-07-15 23:51:49.514103] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.650 [2024-07-15 23:51:49.514133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.650 qpair failed and we were unable to recover it. 00:25:14.650 [2024-07-15 23:51:49.514237] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.650 [2024-07-15 23:51:49.514271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.650 qpair failed and we were unable to recover it. 00:25:14.650 [2024-07-15 23:51:49.514398] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.650 [2024-07-15 23:51:49.514426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.650 qpair failed and we were unable to recover it. 00:25:14.650 [2024-07-15 23:51:49.514552] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.650 [2024-07-15 23:51:49.514579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.650 qpair failed and we were unable to recover it. 00:25:14.650 [2024-07-15 23:51:49.514679] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.650 [2024-07-15 23:51:49.514706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.650 qpair failed and we were unable to recover it. 00:25:14.650 [2024-07-15 23:51:49.514829] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.650 [2024-07-15 23:51:49.514855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.650 qpair failed and we were unable to recover it. 00:25:14.650 [2024-07-15 23:51:49.514986] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.650 [2024-07-15 23:51:49.515017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.650 qpair failed and we were unable to recover it. 
00:25:14.650 [2024-07-15 23:51:49.515115] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.650 [2024-07-15 23:51:49.515142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.650 qpair failed and we were unable to recover it. 00:25:14.650 [2024-07-15 23:51:49.515237] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.650 [2024-07-15 23:51:49.515265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.650 qpair failed and we were unable to recover it. 00:25:14.650 [2024-07-15 23:51:49.515361] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.650 [2024-07-15 23:51:49.515388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.650 qpair failed and we were unable to recover it. 00:25:14.650 [2024-07-15 23:51:49.515490] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.650 [2024-07-15 23:51:49.515516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.650 qpair failed and we were unable to recover it. 00:25:14.650 [2024-07-15 23:51:49.515639] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.650 [2024-07-15 23:51:49.515665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.650 qpair failed and we were unable to recover it. 00:25:14.650 [2024-07-15 23:51:49.515761] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.650 [2024-07-15 23:51:49.515787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.650 qpair failed and we were unable to recover it. 00:25:14.650 [2024-07-15 23:51:49.515912] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.650 [2024-07-15 23:51:49.515939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.650 qpair failed and we were unable to recover it. 00:25:14.650 [2024-07-15 23:51:49.516073] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.650 [2024-07-15 23:51:49.516100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.650 qpair failed and we were unable to recover it. 00:25:14.650 [2024-07-15 23:51:49.516243] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.650 [2024-07-15 23:51:49.516270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.650 qpair failed and we were unable to recover it. 00:25:14.650 [2024-07-15 23:51:49.516358] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.650 [2024-07-15 23:51:49.516384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.650 qpair failed and we were unable to recover it. 
00:25:14.650 [2024-07-15 23:51:49.516503] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.650 [2024-07-15 23:51:49.516530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.650 qpair failed and we were unable to recover it. 00:25:14.650 [2024-07-15 23:51:49.516632] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.650 [2024-07-15 23:51:49.516658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.650 qpair failed and we were unable to recover it. 00:25:14.650 [2024-07-15 23:51:49.516784] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.650 [2024-07-15 23:51:49.516816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.650 qpair failed and we were unable to recover it. 00:25:14.650 [2024-07-15 23:51:49.516913] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.516951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.517067] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.517094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.517186] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.517212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.517330] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.517357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.517455] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.517481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.517610] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.517636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.517753] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.517779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 
00:25:14.651 [2024-07-15 23:51:49.517902] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.517928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.518071] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.518098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.518194] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.518220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.518345] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.518372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.518490] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.518516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.518613] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.518640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.518769] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.518795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.518899] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.518925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.519047] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.519077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.519173] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.519200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 
00:25:14.651 [2024-07-15 23:51:49.519327] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.519353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.519472] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.519499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.519649] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.519675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.519776] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.519803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.519906] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.519935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.520050] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.520076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.520204] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.520230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.520353] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.520379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.520500] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.520526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.520623] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.520649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 
00:25:14.651 [2024-07-15 23:51:49.520750] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.520778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.520906] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.520933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.521038] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.521066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.521165] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.521192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.521298] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.521324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.521443] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.521470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.521593] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.521625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.521723] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.521749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.521916] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.521964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.522076] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.522105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 
00:25:14.651 [2024-07-15 23:51:49.522205] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.522235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.522329] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.522356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.522447] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.522475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.522581] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.522608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.522740] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.522768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.522900] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.522928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.523039] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.523066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.523159] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.523186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.523281] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.523308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.523404] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.523432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 
00:25:14.651 [2024-07-15 23:51:49.523526] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.523552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.523639] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.523666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.523778] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.523820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.651 qpair failed and we were unable to recover it. 00:25:14.651 [2024-07-15 23:51:49.523946] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.651 [2024-07-15 23:51:49.523981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.652 qpair failed and we were unable to recover it. 00:25:14.652 [2024-07-15 23:51:49.524082] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.652 [2024-07-15 23:51:49.524108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.652 qpair failed and we were unable to recover it. 00:25:14.652 [2024-07-15 23:51:49.524209] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.652 [2024-07-15 23:51:49.524235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.652 qpair failed and we were unable to recover it. 00:25:14.652 [2024-07-15 23:51:49.524328] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.652 [2024-07-15 23:51:49.524354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.652 qpair failed and we were unable to recover it. 00:25:14.652 [2024-07-15 23:51:49.524441] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.652 [2024-07-15 23:51:49.524467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.652 qpair failed and we were unable to recover it. 00:25:14.652 [2024-07-15 23:51:49.524586] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.652 [2024-07-15 23:51:49.524612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.652 qpair failed and we were unable to recover it. 00:25:14.652 [2024-07-15 23:51:49.524741] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.652 [2024-07-15 23:51:49.524770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.652 qpair failed and we were unable to recover it. 
00:25:14.652 [2024-07-15 23:51:49.524863] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.652 [2024-07-15 23:51:49.524890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.652 qpair failed and we were unable to recover it. 00:25:14.652 [2024-07-15 23:51:49.525025] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.652 [2024-07-15 23:51:49.525055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.652 qpair failed and we were unable to recover it. 00:25:14.652 [2024-07-15 23:51:49.525205] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.652 [2024-07-15 23:51:49.525232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.652 qpair failed and we were unable to recover it. 00:25:14.652 [2024-07-15 23:51:49.525355] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.652 [2024-07-15 23:51:49.525386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.652 qpair failed and we were unable to recover it. 00:25:14.652 [2024-07-15 23:51:49.525477] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.652 [2024-07-15 23:51:49.525504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.652 qpair failed and we were unable to recover it. 00:25:14.652 [2024-07-15 23:51:49.525624] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.652 [2024-07-15 23:51:49.525651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.652 qpair failed and we were unable to recover it. 00:25:14.652 [2024-07-15 23:51:49.525792] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.652 [2024-07-15 23:51:49.525833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.652 qpair failed and we were unable to recover it. 00:25:14.652 [2024-07-15 23:51:49.525944] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.652 [2024-07-15 23:51:49.525981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.652 qpair failed and we were unable to recover it. 00:25:14.652 [2024-07-15 23:51:49.526091] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.652 [2024-07-15 23:51:49.526118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.652 qpair failed and we were unable to recover it. 00:25:14.652 [2024-07-15 23:51:49.526208] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.652 [2024-07-15 23:51:49.526235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.652 qpair failed and we were unable to recover it. 
00:25:14.652 [2024-07-15 23:51:49.526325] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.652 [2024-07-15 23:51:49.526353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.652 qpair failed and we were unable to recover it. 00:25:14.652 [2024-07-15 23:51:49.526487] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.652 [2024-07-15 23:51:49.526515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.652 qpair failed and we were unable to recover it. 00:25:14.652 [2024-07-15 23:51:49.526611] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.652 [2024-07-15 23:51:49.526637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.652 qpair failed and we were unable to recover it. 00:25:14.652 [2024-07-15 23:51:49.526746] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.652 [2024-07-15 23:51:49.526772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.652 qpair failed and we were unable to recover it. 00:25:14.652 [2024-07-15 23:51:49.526887] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.652 [2024-07-15 23:51:49.526913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.652 qpair failed and we were unable to recover it. 00:25:14.652 [2024-07-15 23:51:49.527016] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.652 [2024-07-15 23:51:49.527044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.652 qpair failed and we were unable to recover it. 00:25:14.652 [2024-07-15 23:51:49.527167] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.652 [2024-07-15 23:51:49.527194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.652 qpair failed and we were unable to recover it. 00:25:14.652 [2024-07-15 23:51:49.527327] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.652 [2024-07-15 23:51:49.527355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.652 qpair failed and we were unable to recover it. 00:25:14.652 [2024-07-15 23:51:49.527451] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.652 [2024-07-15 23:51:49.527478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.652 qpair failed and we were unable to recover it. 00:25:14.652 [2024-07-15 23:51:49.527577] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.652 [2024-07-15 23:51:49.527604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.652 qpair failed and we were unable to recover it. 
00:25:14.652 [2024-07-15 23:51:49.527704] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.652 [2024-07-15 23:51:49.527732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.652 qpair failed and we were unable to recover it. 00:25:14.652 [2024-07-15 23:51:49.527831] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.652 [2024-07-15 23:51:49.527860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.652 qpair failed and we were unable to recover it. 00:25:14.652 [2024-07-15 23:51:49.527966] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.652 [2024-07-15 23:51:49.527993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.652 qpair failed and we were unable to recover it. 00:25:14.652 [2024-07-15 23:51:49.528113] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.652 [2024-07-15 23:51:49.528139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.652 qpair failed and we were unable to recover it. 00:25:14.652 [2024-07-15 23:51:49.528240] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.652 [2024-07-15 23:51:49.528267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.652 qpair failed and we were unable to recover it. 00:25:14.652 [2024-07-15 23:51:49.528387] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.652 [2024-07-15 23:51:49.528414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.652 qpair failed and we were unable to recover it. 00:25:14.652 [2024-07-15 23:51:49.528542] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.652 [2024-07-15 23:51:49.528568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.528689] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.528715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.528868] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.528894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.529016] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.529045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 
00:25:14.653 [2024-07-15 23:51:49.529180] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.529215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.529322] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.529349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.529445] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.529472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.529599] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.529626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.529726] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.529753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.529874] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.529902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.530002] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.530029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.530126] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.530153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.530266] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.530293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.530390] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.530416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 
00:25:14.653 [2024-07-15 23:51:49.530537] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.530563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.530684] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.530711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.530830] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.530856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.530982] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.531009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.531140] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.531167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.531292] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.531319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.531409] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.531436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.531526] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.531553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.531650] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.531676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.531777] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.531804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 
00:25:14.653 [2024-07-15 23:51:49.531896] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.531925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.532072] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.532112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.532208] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.532236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.532334] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.532361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.532487] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.532514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.532666] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.532693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.532786] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.532814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.532939] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.532984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.533111] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.533140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.533237] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.533271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 
00:25:14.653 [2024-07-15 23:51:49.533404] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.533431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.533549] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.533576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.533667] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.533694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.533821] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.533849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.533975] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.534002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.534128] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.534154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.534281] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.534307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.534427] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.534454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.534579] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.534607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.534706] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.534733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 
00:25:14.653 [2024-07-15 23:51:49.534823] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.534851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.534959] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.534986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.535084] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.535111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.535208] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.535235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.535366] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.535393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.535523] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.535552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.535686] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.535713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.653 [2024-07-15 23:51:49.535835] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.653 [2024-07-15 23:51:49.535863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.653 qpair failed and we were unable to recover it. 00:25:14.654 [2024-07-15 23:51:49.535953] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.654 [2024-07-15 23:51:49.535986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.654 qpair failed and we were unable to recover it. 00:25:14.654 [2024-07-15 23:51:49.536106] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.654 [2024-07-15 23:51:49.536132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.654 qpair failed and we were unable to recover it. 
00:25:14.654 [2024-07-15 23:51:49.536226] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.654 [2024-07-15 23:51:49.536253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.654 qpair failed and we were unable to recover it. 00:25:14.654 [2024-07-15 23:51:49.536376] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.654 [2024-07-15 23:51:49.536403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.654 qpair failed and we were unable to recover it. 00:25:14.654 [2024-07-15 23:51:49.536521] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.654 [2024-07-15 23:51:49.536547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.654 qpair failed and we were unable to recover it. 00:25:14.654 [2024-07-15 23:51:49.536669] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.654 [2024-07-15 23:51:49.536695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.654 qpair failed and we were unable to recover it. 00:25:14.654 [2024-07-15 23:51:49.536817] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.654 [2024-07-15 23:51:49.536846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.654 qpair failed and we were unable to recover it. 00:25:14.654 [2024-07-15 23:51:49.536972] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.654 [2024-07-15 23:51:49.536999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.654 qpair failed and we were unable to recover it. 00:25:14.654 [2024-07-15 23:51:49.537100] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.654 [2024-07-15 23:51:49.537127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.654 qpair failed and we were unable to recover it. 00:25:14.654 [2024-07-15 23:51:49.537225] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.654 [2024-07-15 23:51:49.537259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.654 qpair failed and we were unable to recover it. 00:25:14.654 [2024-07-15 23:51:49.537409] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.654 [2024-07-15 23:51:49.537436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.654 qpair failed and we were unable to recover it. 00:25:14.654 [2024-07-15 23:51:49.537533] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.654 [2024-07-15 23:51:49.537560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.654 qpair failed and we were unable to recover it. 
00:25:14.654 [2024-07-15 23:51:49.537654] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.654 [2024-07-15 23:51:49.537681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.654 qpair failed and we were unable to recover it.
00:25:14.654 [2024-07-15 23:51:49.537788] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.654 [2024-07-15 23:51:49.537829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.654 qpair failed and we were unable to recover it.
00:25:14.654 [2024-07-15 23:51:49.537936] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.654 [2024-07-15 23:51:49.537975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.654 qpair failed and we were unable to recover it.
00:25:14.654 [2024-07-15 23:51:49.538072] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.654 [2024-07-15 23:51:49.538099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.654 qpair failed and we were unable to recover it.
00:25:14.654 [2024-07-15 23:51:49.538195] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.654 [2024-07-15 23:51:49.538223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.654 qpair failed and we were unable to recover it.
00:25:14.654 [2024-07-15 23:51:49.538344] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.654 [2024-07-15 23:51:49.538372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.654 qpair failed and we were unable to recover it.
00:25:14.654 [2024-07-15 23:51:49.538472] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.654 [2024-07-15 23:51:49.538500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.654 qpair failed and we were unable to recover it.
00:25:14.654 [2024-07-15 23:51:49.538620] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.654 [2024-07-15 23:51:49.538646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.654 qpair failed and we were unable to recover it.
00:25:14.654 [2024-07-15 23:51:49.538815] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.654 [2024-07-15 23:51:49.538855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.654 qpair failed and we were unable to recover it.
00:25:14.654 [2024-07-15 23:51:49.538980] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.654 [2024-07-15 23:51:49.539020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.654 qpair failed and we were unable to recover it.
00:25:14.654 [2024-07-15 23:51:49.539146] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.654 [2024-07-15 23:51:49.539174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.654 qpair failed and we were unable to recover it.
00:25:14.654 [2024-07-15 23:51:49.539295] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.654 [2024-07-15 23:51:49.539322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.654 qpair failed and we were unable to recover it.
00:25:14.654 [2024-07-15 23:51:49.539449] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.654 [2024-07-15 23:51:49.539475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.654 qpair failed and we were unable to recover it.
00:25:14.654 [2024-07-15 23:51:49.539621] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.654 [2024-07-15 23:51:49.539647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.654 qpair failed and we were unable to recover it.
00:25:14.654 [2024-07-15 23:51:49.539746] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.654 [2024-07-15 23:51:49.539774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.654 qpair failed and we were unable to recover it.
00:25:14.654 [2024-07-15 23:51:49.539869] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.654 [2024-07-15 23:51:49.539897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.654 qpair failed and we were unable to recover it.
00:25:14.654 [2024-07-15 23:51:49.540028] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.654 [2024-07-15 23:51:49.540058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.654 qpair failed and we were unable to recover it.
00:25:14.654 [2024-07-15 23:51:49.540158] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.654 [2024-07-15 23:51:49.540185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.654 qpair failed and we were unable to recover it.
00:25:14.654 [2024-07-15 23:51:49.540317] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.654 [2024-07-15 23:51:49.540343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.654 qpair failed and we were unable to recover it.
00:25:14.654 [2024-07-15 23:51:49.540435] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.654 [2024-07-15 23:51:49.540462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.654 qpair failed and we were unable to recover it.
00:25:14.654 [2024-07-15 23:51:49.540555] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.654 [2024-07-15 23:51:49.540582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.654 qpair failed and we were unable to recover it.
00:25:14.654 [2024-07-15 23:51:49.540693] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.654 [2024-07-15 23:51:49.540734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.654 qpair failed and we were unable to recover it.
00:25:14.654 [2024-07-15 23:51:49.540863] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.654 [2024-07-15 23:51:49.540891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.654 qpair failed and we were unable to recover it.
00:25:14.654 [2024-07-15 23:51:49.541023] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.654 [2024-07-15 23:51:49.541050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.654 qpair failed and we were unable to recover it.
00:25:14.654 [2024-07-15 23:51:49.541151] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.654 [2024-07-15 23:51:49.541180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.654 qpair failed and we were unable to recover it.
00:25:14.654 [2024-07-15 23:51:49.541339] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.654 [2024-07-15 23:51:49.541367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.654 qpair failed and we were unable to recover it.
00:25:14.654 [2024-07-15 23:51:49.541492] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.654 [2024-07-15 23:51:49.541519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.654 qpair failed and we were unable to recover it.
00:25:14.654 [2024-07-15 23:51:49.541654] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.654 [2024-07-15 23:51:49.541682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.654 qpair failed and we were unable to recover it.
00:25:14.654 [2024-07-15 23:51:49.541810] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.654 [2024-07-15 23:51:49.541849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.654 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.541950] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.541984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.542110] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.542138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.542234] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.542261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.542389] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.542417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.542565] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.542593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.542717] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.542749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.542856] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.542883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.542986] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.543026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.543151] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.543179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.543328] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.543354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.543455] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.543481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.543604] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.543630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.543719] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.543745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.543844] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.543873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.543976] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.544003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.544128] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.544155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.544251] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.544279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.544409] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.544439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.544594] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.544622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.544753] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.544781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.544903] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.544930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.545039] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.545070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.545201] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.545228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.545329] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.545367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.545469] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.545497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.545588] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.545615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.545711] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.545740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.545860] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.545900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.546006] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.546035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.546129] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.546156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.546251] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.546279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.546395] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.546422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.546526] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.546554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.546677] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.546704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.546798] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.546828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.546953] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.546989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.547092] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.547121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.547242] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.547270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.547393] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.547421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.547543] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.547570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.547663] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.547690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.547834] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.547861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.548003] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.548043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.548158] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.548187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.548308] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.548335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.548439] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.548471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.548563] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.548589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.548716] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.548745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.548872] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.548900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.549014] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.549043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.655 qpair failed and we were unable to recover it.
00:25:14.655 [2024-07-15 23:51:49.549138] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.655 [2024-07-15 23:51:49.549165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.656 qpair failed and we were unable to recover it.
00:25:14.656 [2024-07-15 23:51:49.549292] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.656 [2024-07-15 23:51:49.549319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.656 qpair failed and we were unable to recover it.
00:25:14.656 [2024-07-15 23:51:49.549417] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.656 [2024-07-15 23:51:49.549445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.656 qpair failed and we were unable to recover it.
00:25:14.656 [2024-07-15 23:51:49.549554] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.656 [2024-07-15 23:51:49.549582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.656 qpair failed and we were unable to recover it.
00:25:14.656 [2024-07-15 23:51:49.549676] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.656 [2024-07-15 23:51:49.549704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.656 qpair failed and we were unable to recover it.
00:25:14.656 [2024-07-15 23:51:49.549849] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.656 [2024-07-15 23:51:49.549876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.656 qpair failed and we were unable to recover it.
00:25:14.656 [2024-07-15 23:51:49.549979] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.656 [2024-07-15 23:51:49.550006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.656 qpair failed and we were unable to recover it.
00:25:14.656 [2024-07-15 23:51:49.550107] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.656 [2024-07-15 23:51:49.550135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.656 qpair failed and we were unable to recover it.
00:25:14.656 [2024-07-15 23:51:49.550264] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.656 [2024-07-15 23:51:49.550293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.656 qpair failed and we were unable to recover it.
00:25:14.656 [2024-07-15 23:51:49.550458] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.656 [2024-07-15 23:51:49.550485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.656 qpair failed and we were unable to recover it.
00:25:14.656 [2024-07-15 23:51:49.550574] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.656 [2024-07-15 23:51:49.550601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.656 qpair failed and we were unable to recover it.
00:25:14.656 [2024-07-15 23:51:49.550695] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.656 [2024-07-15 23:51:49.550722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.656 qpair failed and we were unable to recover it.
00:25:14.656 [2024-07-15 23:51:49.550815] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.656 [2024-07-15 23:51:49.550842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.656 qpair failed and we were unable to recover it.
00:25:14.656 [2024-07-15 23:51:49.550972] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.656 [2024-07-15 23:51:49.551000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.656 qpair failed and we were unable to recover it.
00:25:14.656 [2024-07-15 23:51:49.551094] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.656 [2024-07-15 23:51:49.551121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.656 qpair failed and we were unable to recover it.
00:25:14.656 [2024-07-15 23:51:49.551238] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.656 [2024-07-15 23:51:49.551279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.656 qpair failed and we were unable to recover it.
00:25:14.656 [2024-07-15 23:51:49.551414] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.656 [2024-07-15 23:51:49.551443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.656 qpair failed and we were unable to recover it.
00:25:14.656 [2024-07-15 23:51:49.551573] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.656 [2024-07-15 23:51:49.551602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.656 qpair failed and we were unable to recover it.
00:25:14.656 [2024-07-15 23:51:49.551704] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.656 [2024-07-15 23:51:49.551740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.656 qpair failed and we were unable to recover it.
00:25:14.656 [2024-07-15 23:51:49.551832] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.656 [2024-07-15 23:51:49.551858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.656 qpair failed and we were unable to recover it.
00:25:14.656 [2024-07-15 23:51:49.551988] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.656 [2024-07-15 23:51:49.552020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.656 qpair failed and we were unable to recover it.
00:25:14.656 [2024-07-15 23:51:49.552143] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.656 [2024-07-15 23:51:49.552169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.656 qpair failed and we were unable to recover it.
00:25:14.656 [2024-07-15 23:51:49.552272] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.656 [2024-07-15 23:51:49.552304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.656 qpair failed and we were unable to recover it.
00:25:14.656 [2024-07-15 23:51:49.552406] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.656 [2024-07-15 23:51:49.552433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.656 qpair failed and we were unable to recover it.
00:25:14.656 [2024-07-15 23:51:49.552557] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.656 [2024-07-15 23:51:49.552586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.656 qpair failed and we were unable to recover it.
00:25:14.656 [2024-07-15 23:51:49.552677] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.656 [2024-07-15 23:51:49.552705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.656 qpair failed and we were unable to recover it.
00:25:14.656 [2024-07-15 23:51:49.552830] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.656 [2024-07-15 23:51:49.552857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.656 qpair failed and we were unable to recover it.
00:25:14.656 [2024-07-15 23:51:49.552982] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.656 [2024-07-15 23:51:49.553010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.656 qpair failed and we were unable to recover it.
00:25:14.656 [2024-07-15 23:51:49.553111] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.656 [2024-07-15 23:51:49.553138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.656 qpair failed and we were unable to recover it.
00:25:14.656 [2024-07-15 23:51:49.553226] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.656 [2024-07-15 23:51:49.553263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.656 qpair failed and we were unable to recover it.
00:25:14.656 [2024-07-15 23:51:49.553418] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.656 [2024-07-15 23:51:49.553445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.656 qpair failed and we were unable to recover it.
00:25:14.656 [2024-07-15 23:51:49.553539] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.656 [2024-07-15 23:51:49.553566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.656 qpair failed and we were unable to recover it.
00:25:14.656 [2024-07-15 23:51:49.553695] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.656 [2024-07-15 23:51:49.553723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.656 qpair failed and we were unable to recover it.
00:25:14.656 [2024-07-15 23:51:49.553856] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.656 [2024-07-15 23:51:49.553884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.656 qpair failed and we were unable to recover it.
00:25:14.656 [2024-07-15 23:51:49.554010] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.656 [2024-07-15 23:51:49.554038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.656 qpair failed and we were unable to recover it.
00:25:14.656 [2024-07-15 23:51:49.554158] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.656 [2024-07-15 23:51:49.554185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.656 qpair failed and we were unable to recover it.
00:25:14.656 [2024-07-15 23:51:49.554313] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.656 [2024-07-15 23:51:49.554340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.656 qpair failed and we were unable to recover it.
00:25:14.656 [2024-07-15 23:51:49.554434] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.656 [2024-07-15 23:51:49.554461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.656 qpair failed and we were unable to recover it.
00:25:14.656 [2024-07-15 23:51:49.554560] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.656 [2024-07-15 23:51:49.554586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.656 qpair failed and we were unable to recover it.
00:25:14.656 [2024-07-15 23:51:49.554715] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.554744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.554871] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.554909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.555018] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.555046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.555177] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.555212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.555299] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.555326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.555454] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.555481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.555573] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.555600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.555701] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.555727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.555821] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.555850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.555944] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.555979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.556143] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.556184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.556347] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.556376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.556500] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.556527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.556677] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.556704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.556829] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.556857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.556960] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.556988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.557144] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.557172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.557276] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.557303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.557427] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.557453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.557578] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.557605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.557761] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.557790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.557919] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.557966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.558075] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.558103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.558204] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.558232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.558359] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.558386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.558511] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.558538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.558664] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.558692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.558839] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.558880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.559016] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.559046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.559170] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.559198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.559322] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.559359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.559468] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.559495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.559638] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.559665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.559799] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.559827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.559945] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.559981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.560101] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.560129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.560232] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.560272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.560427] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.560455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.560542] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.560570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.560668] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.560697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.560820] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.560846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.560934] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.560978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.561126] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.561152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.561256] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.561284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.561384] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.561411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.561504] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.561530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.561626] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.561652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.561757] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.561784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.657 [2024-07-15 23:51:49.561890] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.657 [2024-07-15 23:51:49.561916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.657 qpair failed and we were unable to recover it.
00:25:14.658 [2024-07-15 23:51:49.562066] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.658 [2024-07-15 23:51:49.562106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.658 qpair failed and we were unable to recover it.
00:25:14.658 [2024-07-15 23:51:49.562210] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.658 [2024-07-15 23:51:49.562264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.658 qpair failed and we were unable to recover it.
00:25:14.658 [2024-07-15 23:51:49.562391] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.658 [2024-07-15 23:51:49.562419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.658 qpair failed and we were unable to recover it.
00:25:14.658 [2024-07-15 23:51:49.562518] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.658 [2024-07-15 23:51:49.562546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.658 qpair failed and we were unable to recover it.
00:25:14.658 [2024-07-15 23:51:49.562641] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.658 [2024-07-15 23:51:49.562668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.658 qpair failed and we were unable to recover it.
00:25:14.658 [2024-07-15 23:51:49.562789] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.658 [2024-07-15 23:51:49.562816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.658 qpair failed and we were unable to recover it.
00:25:14.658 [2024-07-15 23:51:49.562969] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.658 [2024-07-15 23:51:49.562997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.658 qpair failed and we were unable to recover it.
00:25:14.658 [2024-07-15 23:51:49.563127] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.658 [2024-07-15 23:51:49.563167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.658 qpair failed and we were unable to recover it.
00:25:14.658 [2024-07-15 23:51:49.563307] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.658 [2024-07-15 23:51:49.563347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.658 qpair failed and we were unable to recover it.
00:25:14.658 [2024-07-15 23:51:49.563478] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.658 [2024-07-15 23:51:49.563507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.658 qpair failed and we were unable to recover it.
00:25:14.658 [2024-07-15 23:51:49.563611] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.658 [2024-07-15 23:51:49.563639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.658 qpair failed and we were unable to recover it.
00:25:14.658 [2024-07-15 23:51:49.563789] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.658 [2024-07-15 23:51:49.563816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.658 qpair failed and we were unable to recover it.
00:25:14.658 [2024-07-15 23:51:49.563908] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.658 [2024-07-15 23:51:49.563935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.658 qpair failed and we were unable to recover it.
00:25:14.658 [2024-07-15 23:51:49.564068] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.658 [2024-07-15 23:51:49.564095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.658 qpair failed and we were unable to recover it.
00:25:14.658 [2024-07-15 23:51:49.564195] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.658 [2024-07-15 23:51:49.564223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.658 qpair failed and we were unable to recover it.
00:25:14.658 [2024-07-15 23:51:49.564363] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.658 [2024-07-15 23:51:49.564390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.658 qpair failed and we were unable to recover it.
00:25:14.658 [2024-07-15 23:51:49.564509] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.658 [2024-07-15 23:51:49.564536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.658 qpair failed and we were unable to recover it.
00:25:14.658 [2024-07-15 23:51:49.564657] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.658 [2024-07-15 23:51:49.564685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.658 qpair failed and we were unable to recover it.
00:25:14.658 [2024-07-15 23:51:49.564818] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.658 [2024-07-15 23:51:49.564858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.658 qpair failed and we were unable to recover it.
00:25:14.658 [2024-07-15 23:51:49.564990] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.658 [2024-07-15 23:51:49.565019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.658 qpair failed and we were unable to recover it.
00:25:14.658 [2024-07-15 23:51:49.565148] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.658 [2024-07-15 23:51:49.565174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.658 qpair failed and we were unable to recover it.
00:25:14.658 [2024-07-15 23:51:49.565268] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.658 [2024-07-15 23:51:49.565295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.658 qpair failed and we were unable to recover it.
00:25:14.658 [2024-07-15 23:51:49.565439] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.658 [2024-07-15 23:51:49.565465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.658 qpair failed and we were unable to recover it.
00:25:14.658 [2024-07-15 23:51:49.565588] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.658 [2024-07-15 23:51:49.565614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.658 qpair failed and we were unable to recover it.
00:25:14.658 [2024-07-15 23:51:49.565712] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.658 [2024-07-15 23:51:49.565741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.658 qpair failed and we were unable to recover it.
00:25:14.658 [2024-07-15 23:51:49.565855] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.658 [2024-07-15 23:51:49.565896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.658 qpair failed and we were unable to recover it.
00:25:14.658 [2024-07-15 23:51:49.566068] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.658 [2024-07-15 23:51:49.566098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.658 qpair failed and we were unable to recover it.
00:25:14.658 [2024-07-15 23:51:49.566228] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.658 [2024-07-15 23:51:49.566261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.658 qpair failed and we were unable to recover it.
00:25:14.658 [2024-07-15 23:51:49.566364] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.658 [2024-07-15 23:51:49.566397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.658 qpair failed and we were unable to recover it.
00:25:14.658 [2024-07-15 23:51:49.566501] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.658 [2024-07-15 23:51:49.566528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.658 qpair failed and we were unable to recover it.
00:25:14.658 [2024-07-15 23:51:49.566631] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.658 [2024-07-15 23:51:49.566659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.658 qpair failed and we were unable to recover it.
00:25:14.658 [2024-07-15 23:51:49.566752] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.658 [2024-07-15 23:51:49.566778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.658 qpair failed and we were unable to recover it.
00:25:14.658 [2024-07-15 23:51:49.566897] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.658 [2024-07-15 23:51:49.566923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.658 qpair failed and we were unable to recover it.
00:25:14.658 [2024-07-15 23:51:49.567046] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.658 [2024-07-15 23:51:49.567072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.658 qpair failed and we were unable to recover it.
00:25:14.658 [2024-07-15 23:51:49.567192] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.658 [2024-07-15 23:51:49.567218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.658 qpair failed and we were unable to recover it.
00:25:14.658 [2024-07-15 23:51:49.567332] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.658 [2024-07-15 23:51:49.567359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.658 qpair failed and we were unable to recover it.
00:25:14.658 [2024-07-15 23:51:49.567490] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.658 [2024-07-15 23:51:49.567518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.658 qpair failed and we were unable to recover it.
00:25:14.658 [2024-07-15 23:51:49.567650] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.658 [2024-07-15 23:51:49.567677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.658 qpair failed and we were unable to recover it.
00:25:14.658 [2024-07-15 23:51:49.567803] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.658 [2024-07-15 23:51:49.567830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.658 qpair failed and we were unable to recover it.
00:25:14.658 [2024-07-15 23:51:49.567965] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.658 [2024-07-15 23:51:49.567993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.658 qpair failed and we were unable to recover it.
00:25:14.658 [2024-07-15 23:51:49.568117] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.658 [2024-07-15 23:51:49.568143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.658 qpair failed and we were unable to recover it.
00:25:14.658 [2024-07-15 23:51:49.568278] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.658 [2024-07-15 23:51:49.568306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.658 qpair failed and we were unable to recover it.
00:25:14.658 [2024-07-15 23:51:49.568438] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.658 [2024-07-15 23:51:49.568466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.658 qpair failed and we were unable to recover it.
00:25:14.658 [2024-07-15 23:51:49.568611] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.658 [2024-07-15 23:51:49.568638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.658 qpair failed and we were unable to recover it. 00:25:14.658 [2024-07-15 23:51:49.568762] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.658 [2024-07-15 23:51:49.568789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.658 qpair failed and we were unable to recover it. 00:25:14.658 [2024-07-15 23:51:49.568881] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.658 [2024-07-15 23:51:49.568909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.658 qpair failed and we were unable to recover it. 00:25:14.658 [2024-07-15 23:51:49.569009] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.658 [2024-07-15 23:51:49.569036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.658 qpair failed and we were unable to recover it. 00:25:14.658 [2024-07-15 23:51:49.569153] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.658 [2024-07-15 23:51:49.569180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.658 qpair failed and we were unable to recover it. 00:25:14.658 [2024-07-15 23:51:49.569284] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.658 [2024-07-15 23:51:49.569312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.658 qpair failed and we were unable to recover it. 00:25:14.658 [2024-07-15 23:51:49.569426] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.658 [2024-07-15 23:51:49.569453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.658 qpair failed and we were unable to recover it. 00:25:14.658 [2024-07-15 23:51:49.569550] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.658 [2024-07-15 23:51:49.569577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.658 qpair failed and we were unable to recover it. 00:25:14.658 [2024-07-15 23:51:49.569707] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.658 [2024-07-15 23:51:49.569734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.658 qpair failed and we were unable to recover it. 00:25:14.658 [2024-07-15 23:51:49.569839] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.658 [2024-07-15 23:51:49.569879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.658 qpair failed and we were unable to recover it. 
00:25:14.658 [2024-07-15 23:51:49.570030] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.658 [2024-07-15 23:51:49.570059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.658 qpair failed and we were unable to recover it. 00:25:14.658 [2024-07-15 23:51:49.570188] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.658 [2024-07-15 23:51:49.570217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.658 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.570330] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.570358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.570454] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.570481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.570606] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.570632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.570769] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.570796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.570964] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.571005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.571117] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.571147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.571255] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.571283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.571383] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.571409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 
00:25:14.659 [2024-07-15 23:51:49.571509] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.571535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.571681] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.571709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.571830] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.571857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.571995] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.572027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.572125] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.572152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.572252] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.572285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.572408] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.572435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.572556] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.572583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.572702] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.572730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.572853] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.572881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 
00:25:14.659 [2024-07-15 23:51:49.573009] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.573036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.573140] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.573170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.573308] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.573335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.573463] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.573490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.573607] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.573634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.573761] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.573788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.573893] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.573921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.574036] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.574064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.574162] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.574189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.574302] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.574329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 
00:25:14.659 [2024-07-15 23:51:49.574480] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.574506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.574651] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.574677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.574809] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.574838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.574982] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.575011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.575166] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.575196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.575330] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.575358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.575488] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.575516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.575659] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.575686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.575816] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.575843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.575983] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.576010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 
00:25:14.659 [2024-07-15 23:51:49.576143] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.576170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.576330] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.576358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.576459] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.576492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.576588] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.576614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.576736] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.576762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.576853] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.576880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.577019] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.577046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.577137] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.577163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.577252] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.577278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.577365] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.577391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 
00:25:14.659 [2024-07-15 23:51:49.577518] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.577544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.577632] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.577657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.577754] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.659 [2024-07-15 23:51:49.577780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.659 qpair failed and we were unable to recover it. 00:25:14.659 [2024-07-15 23:51:49.577900] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.577926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.578063] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.578089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.578189] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.578215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.578347] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.578374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.578476] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.578502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.578629] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.578655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.578753] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.578779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 
00:25:14.660 [2024-07-15 23:51:49.578897] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.578923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.579077] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.579106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.579229] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.579263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.579351] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.579379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.579472] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.579500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.579635] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.579676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.579786] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.579814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.579944] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.579991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.580116] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.580143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.580234] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.580271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 
00:25:14.660 [2024-07-15 23:51:49.580366] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.580394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.580524] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.580552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.580675] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.580703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.580831] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.580859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.580962] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.580988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.581094] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.581120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.581241] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.581273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.581364] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.581390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.581515] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.581540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.581662] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.581690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 
00:25:14.660 [2024-07-15 23:51:49.581856] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.581885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.581994] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.582022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.582144] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.582172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.582304] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.582331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.582434] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.582461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.582557] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.582585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.582684] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.582710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.582837] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.582864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.582962] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.582990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.583093] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.583120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 
00:25:14.660 [2024-07-15 23:51:49.583219] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.583256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.583405] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.583432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.583585] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.583611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.583712] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.583739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.583841] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.583869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.583976] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.584004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.584125] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.584155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.584283] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.584310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.584432] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.584459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.584573] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.584614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 
00:25:14.660 [2024-07-15 23:51:49.584720] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.584746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.584846] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.584872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.584997] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.585025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.585122] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.585148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.585275] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.585302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.585421] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.585448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.585550] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.585580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.585678] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.585706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.660 qpair failed and we were unable to recover it. 00:25:14.660 [2024-07-15 23:51:49.585800] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.660 [2024-07-15 23:51:49.585827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.661 qpair failed and we were unable to recover it. 00:25:14.661 [2024-07-15 23:51:49.585952] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.661 [2024-07-15 23:51:49.585985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.661 qpair failed and we were unable to recover it. 
00:25:14.661 [2024-07-15 23:51:49.586092] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.661 [2024-07-15 23:51:49.586121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.661 qpair failed and we were unable to recover it. 00:25:14.661 [2024-07-15 23:51:49.586216] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.661 [2024-07-15 23:51:49.586255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.661 qpair failed and we were unable to recover it. 00:25:14.661 [2024-07-15 23:51:49.586349] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.661 [2024-07-15 23:51:49.586377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.661 qpair failed and we were unable to recover it. 00:25:14.661 [2024-07-15 23:51:49.586500] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.661 [2024-07-15 23:51:49.586528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.661 qpair failed and we were unable to recover it. 00:25:14.661 [2024-07-15 23:51:49.586630] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.661 [2024-07-15 23:51:49.586657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.661 qpair failed and we were unable to recover it. 00:25:14.661 [2024-07-15 23:51:49.586755] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.661 [2024-07-15 23:51:49.586782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.661 qpair failed and we were unable to recover it. 00:25:14.661 [2024-07-15 23:51:49.586905] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.661 [2024-07-15 23:51:49.586933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.661 qpair failed and we were unable to recover it. 00:25:14.661 [2024-07-15 23:51:49.587065] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.661 [2024-07-15 23:51:49.587094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.661 qpair failed and we were unable to recover it. 00:25:14.661 [2024-07-15 23:51:49.587224] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.661 [2024-07-15 23:51:49.587264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.661 qpair failed and we were unable to recover it. 00:25:14.661 [2024-07-15 23:51:49.587415] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.661 [2024-07-15 23:51:49.587442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.661 qpair failed and we were unable to recover it. 
00:25:14.661 [2024-07-15 23:51:49.587569] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.661 [2024-07-15 23:51:49.587596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.661 qpair failed and we were unable to recover it.
00:25:14.661 [2024-07-15 23:51:49.588840] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.661 [2024-07-15 23:51:49.588869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.661 qpair failed and we were unable to recover it.
00:25:14.661 [2024-07-15 23:51:49.589921] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.661 [2024-07-15 23:51:49.589973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.661 qpair failed and we were unable to recover it.
00:25:14.661 [2024-07-15 23:51:49.591526] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.661 [2024-07-15 23:51:49.591566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.661 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for every further connection attempt in the window 23:51:49.587701-23:51:49.617531, cycling across tqpair=0x7a7200, 0x7feb8c000b90, 0x7feb94000b90, and 0x7feb84000b90, all with addr=10.0.0.2, port=4420 ...]
00:25:14.664 [2024-07-15 23:51:49.617629] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.664 [2024-07-15 23:51:49.617655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.664 qpair failed and we were unable to recover it.
00:25:14.664 [2024-07-15 23:51:49.617751] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.664 [2024-07-15 23:51:49.617778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.664 qpair failed and we were unable to recover it. 00:25:14.664 [2024-07-15 23:51:49.617868] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.664 [2024-07-15 23:51:49.617895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.664 qpair failed and we were unable to recover it. 00:25:14.664 [2024-07-15 23:51:49.618019] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.664 [2024-07-15 23:51:49.618049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.664 qpair failed and we were unable to recover it. 00:25:14.664 [2024-07-15 23:51:49.618140] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.664 [2024-07-15 23:51:49.618168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.664 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.618266] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.618292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.618387] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.618414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.618535] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.618563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.618682] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.618708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.618828] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.618855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.618982] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.619010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 
00:25:14.665 [2024-07-15 23:51:49.619127] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.619153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.619279] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.619306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.619435] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.619461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.619595] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.619621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.619742] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.619769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.619871] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.619897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.620022] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.620050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.620178] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.620205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.620345] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.620372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.620524] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.620550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 
00:25:14.665 [2024-07-15 23:51:49.620646] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.620674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.620799] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.620828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.620953] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.620991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.621114] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.621141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.621279] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.621320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.621449] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.621478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.621575] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.621603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.621722] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.621748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.621899] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.621926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.622076] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.622103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 
00:25:14.665 [2024-07-15 23:51:49.622208] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.622237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.622337] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.622364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.622489] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.622516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.622607] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.622634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.622758] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.622786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.622904] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.622931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.623080] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.623107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.623229] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.623255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.623353] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.623380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.623475] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.623501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 
00:25:14.665 [2024-07-15 23:51:49.623590] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.623616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.623715] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.623741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.623860] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.623887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.623974] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.624001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.624118] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.624144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.624259] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.624285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.624408] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.624434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.624556] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.624582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.624701] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.624729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.624848] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.624880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 
00:25:14.665 [2024-07-15 23:51:49.624997] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.625038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.625143] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.625172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.625296] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.625325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.625423] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.625451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.625565] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.625591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.625691] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.625718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.625811] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.625839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.625930] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.625964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.626062] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.626088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 00:25:14.665 [2024-07-15 23:51:49.626225] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.665 [2024-07-15 23:51:49.626252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.665 qpair failed and we were unable to recover it. 
00:25:14.665 [2024-07-15 23:51:49.626339] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.665 [2024-07-15 23:51:49.626365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.665 qpair failed and we were unable to recover it.
00:25:14.665 [2024-07-15 23:51:49.626450] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.665 [2024-07-15 23:51:49.626477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.665 qpair failed and we were unable to recover it.
00:25:14.665 [2024-07-15 23:51:49.626576] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.665 [2024-07-15 23:51:49.626604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.665 qpair failed and we were unable to recover it.
00:25:14.665 [2024-07-15 23:51:49.626598] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:14.665 [2024-07-15 23:51:49.626633] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:14.666 [2024-07-15 23:51:49.626649] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:14.666 [2024-07-15 23:51:49.626661] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:14.666 [2024-07-15 23:51:49.626671] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:14.666 [2024-07-15 23:51:49.626727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:25:14.666 [2024-07-15 23:51:49.626774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:25:14.666 [2024-07-15 23:51:49.626875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:25:14.666 [2024-07-15 23:51:49.626878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:25:14.666 [2024-07-15 23:51:49.626736] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.666 [2024-07-15 23:51:49.626787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.666 qpair failed and we were unable to recover it.
00:25:14.666 [2024-07-15 23:51:49.627016] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.666 [2024-07-15 23:51:49.627043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.666 qpair failed and we were unable to recover it.
00:25:14.666 [2024-07-15 23:51:49.627153] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.666 [2024-07-15 23:51:49.627178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.666 qpair failed and we were unable to recover it.
00:25:14.666 [2024-07-15 23:51:49.627274] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.666 [2024-07-15 23:51:49.627301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.666 qpair failed and we were unable to recover it.
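
The NOTICE lines in this stretch come from the other side of the connection: app_setup_trace and reactor_run fire while an SPDK application is still bringing up its reactors on cores 4-7, which is consistent with the connects above being refused; until that application finishes startup and binds its listener, there is likely nothing accepting on 10.0.0.2:4420. The repeated "qpair failed and we were unable to recover it." entries show the initiator retrying each queue-pair connect and eventually giving up. A simplified retry loop in the same spirit is sketched below; it is not SPDK's nvme_tcp implementation, and try_connect, max_attempts, and the backoff interval are hypothetical choices for illustration:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    /* Hypothetical helper: one TCP connect attempt; returns the fd on
     * success, -1 on failure with errno left set by socket()/connect(). */
    static int try_connect(const char *ip, unsigned short port)
    {
        struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(port) };
        inet_pton(AF_INET, ip, &addr.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
            return fd;
        close(fd);
        return -1;
    }

    int main(void)
    {
        const int max_attempts = 30;   /* hypothetical bound, not SPDK's */
        for (int i = 1; i <= max_attempts; i++) {
            int fd = try_connect("10.0.0.2", 4420);
            if (fd >= 0) {
                printf("connected on attempt %d\n", i);
                close(fd);
                return 0;
            }
            fprintf(stderr, "attempt %d: connect() failed, errno = %d (%s)\n",
                    i, errno, strerror(errno));
            usleep(100 * 1000);        /* brief backoff before retrying */
        }
        /* Analogous to the log's "qpair failed and we were unable to
         * recover it.": all attempts exhausted without a listener. */
        fprintf(stderr, "unable to recover: no listener came up\n");
        return 1;
    }
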
00:25:14.666 [2024-07-15 23:51:49.627405] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.627432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 00:25:14.666 [2024-07-15 23:51:49.627535] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.627561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 00:25:14.666 [2024-07-15 23:51:49.627700] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.627726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 00:25:14.666 [2024-07-15 23:51:49.627842] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.627869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 00:25:14.666 [2024-07-15 23:51:49.627969] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.627996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 00:25:14.666 [2024-07-15 23:51:49.628090] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.628117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 00:25:14.666 [2024-07-15 23:51:49.628232] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.628272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 00:25:14.666 [2024-07-15 23:51:49.628372] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.628399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 00:25:14.666 [2024-07-15 23:51:49.628547] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.628574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 00:25:14.666 [2024-07-15 23:51:49.628664] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.628691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 
00:25:14.666 [2024-07-15 23:51:49.628787] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.628814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 00:25:14.666 [2024-07-15 23:51:49.628915] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.628944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 00:25:14.666 [2024-07-15 23:51:49.629056] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.629083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 00:25:14.666 [2024-07-15 23:51:49.629172] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.629198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 00:25:14.666 [2024-07-15 23:51:49.629325] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.629351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 00:25:14.666 [2024-07-15 23:51:49.629496] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.629522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 00:25:14.666 [2024-07-15 23:51:49.629612] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.629638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 00:25:14.666 [2024-07-15 23:51:49.629734] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.629763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 00:25:14.666 [2024-07-15 23:51:49.629864] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.629892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 00:25:14.666 [2024-07-15 23:51:49.630008] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.630054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 
00:25:14.666 [2024-07-15 23:51:49.630153] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.630182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 00:25:14.666 [2024-07-15 23:51:49.630285] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.630315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 00:25:14.666 [2024-07-15 23:51:49.630411] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.630439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 00:25:14.666 [2024-07-15 23:51:49.630541] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.630570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 00:25:14.666 [2024-07-15 23:51:49.630667] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.630693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 00:25:14.666 [2024-07-15 23:51:49.630793] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.630821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 00:25:14.666 [2024-07-15 23:51:49.630921] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.630948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 00:25:14.666 [2024-07-15 23:51:49.631045] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.631072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 00:25:14.666 [2024-07-15 23:51:49.631170] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.631196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 00:25:14.666 [2024-07-15 23:51:49.631327] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.631353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 
00:25:14.666 [2024-07-15 23:51:49.631446] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.631473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 00:25:14.666 [2024-07-15 23:51:49.631575] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.631603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 00:25:14.666 [2024-07-15 23:51:49.631699] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.631725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 00:25:14.666 [2024-07-15 23:51:49.631831] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.631871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 00:25:14.666 [2024-07-15 23:51:49.631975] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.632004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 00:25:14.666 [2024-07-15 23:51:49.632104] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.632132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 00:25:14.666 [2024-07-15 23:51:49.632284] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.632312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 00:25:14.666 [2024-07-15 23:51:49.632403] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.632430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 00:25:14.666 [2024-07-15 23:51:49.632551] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.632578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 00:25:14.666 [2024-07-15 23:51:49.632682] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.632709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 
00:25:14.666 [2024-07-15 23:51:49.632816] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.632845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 00:25:14.666 [2024-07-15 23:51:49.632940] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.632980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 00:25:14.666 [2024-07-15 23:51:49.633083] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.633109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 00:25:14.666 [2024-07-15 23:51:49.633224] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.633251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 00:25:14.666 [2024-07-15 23:51:49.633346] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.633373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 00:25:14.666 [2024-07-15 23:51:49.633465] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.633491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 00:25:14.666 [2024-07-15 23:51:49.633581] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.633612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 00:25:14.666 [2024-07-15 23:51:49.633738] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.633764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.666 qpair failed and we were unable to recover it. 00:25:14.666 [2024-07-15 23:51:49.633858] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.666 [2024-07-15 23:51:49.633886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.667 qpair failed and we were unable to recover it. 00:25:14.667 [2024-07-15 23:51:49.633988] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.667 [2024-07-15 23:51:49.634017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.667 qpair failed and we were unable to recover it. 
00:25:14.667 [2024-07-15 23:51:49.634114] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.667 [2024-07-15 23:51:49.634141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.667 qpair failed and we were unable to recover it. 00:25:14.667 [2024-07-15 23:51:49.634236] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.667 [2024-07-15 23:51:49.634263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.667 qpair failed and we were unable to recover it. 00:25:14.667 [2024-07-15 23:51:49.634354] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.667 [2024-07-15 23:51:49.634381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.667 qpair failed and we were unable to recover it. 00:25:14.667 [2024-07-15 23:51:49.634470] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.667 [2024-07-15 23:51:49.634497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.667 qpair failed and we were unable to recover it. 00:25:14.667 [2024-07-15 23:51:49.634595] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.667 [2024-07-15 23:51:49.634623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.667 qpair failed and we were unable to recover it. 00:25:14.667 [2024-07-15 23:51:49.634753] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.667 [2024-07-15 23:51:49.634779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.667 qpair failed and we were unable to recover it. 00:25:14.667 [2024-07-15 23:51:49.634868] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.667 [2024-07-15 23:51:49.634895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.667 qpair failed and we were unable to recover it. 00:25:14.667 [2024-07-15 23:51:49.635038] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.667 [2024-07-15 23:51:49.635065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.667 qpair failed and we were unable to recover it. 00:25:14.667 [2024-07-15 23:51:49.635160] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.667 [2024-07-15 23:51:49.635187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.667 qpair failed and we were unable to recover it. 00:25:14.667 [2024-07-15 23:51:49.635278] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.667 [2024-07-15 23:51:49.635304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.667 qpair failed and we were unable to recover it. 
00:25:14.667 [2024-07-15 23:51:49.635455] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.667 [2024-07-15 23:51:49.635483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.667 qpair failed and we were unable to recover it. 00:25:14.667 [2024-07-15 23:51:49.635579] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.667 [2024-07-15 23:51:49.635605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.667 qpair failed and we were unable to recover it. 00:25:14.667 [2024-07-15 23:51:49.635693] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.667 [2024-07-15 23:51:49.635719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.667 qpair failed and we were unable to recover it. 00:25:14.667 [2024-07-15 23:51:49.635809] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.667 [2024-07-15 23:51:49.635836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.667 qpair failed and we were unable to recover it. 00:25:14.667 [2024-07-15 23:51:49.635960] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.667 [2024-07-15 23:51:49.635989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.667 qpair failed and we were unable to recover it. 00:25:14.667 [2024-07-15 23:51:49.636086] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.667 [2024-07-15 23:51:49.636114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.667 qpair failed and we were unable to recover it. 00:25:14.667 [2024-07-15 23:51:49.636208] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.667 [2024-07-15 23:51:49.636234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.667 qpair failed and we were unable to recover it. 00:25:14.667 [2024-07-15 23:51:49.636357] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.667 [2024-07-15 23:51:49.636383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.667 qpair failed and we were unable to recover it. 00:25:14.667 [2024-07-15 23:51:49.636482] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.667 [2024-07-15 23:51:49.636510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.667 qpair failed and we were unable to recover it. 00:25:14.667 [2024-07-15 23:51:49.636607] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.667 [2024-07-15 23:51:49.636635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.667 qpair failed and we were unable to recover it. 
00:25:14.667 [2024-07-15 23:51:49.636738] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.667 [2024-07-15 23:51:49.636764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.667 qpair failed and we were unable to recover it. 00:25:14.667 [2024-07-15 23:51:49.636862] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.667 [2024-07-15 23:51:49.636888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.667 qpair failed and we were unable to recover it. 00:25:14.667 [2024-07-15 23:51:49.636985] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.667 [2024-07-15 23:51:49.637012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.667 qpair failed and we were unable to recover it. 00:25:14.667 [2024-07-15 23:51:49.637099] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.667 [2024-07-15 23:51:49.637130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.667 qpair failed and we were unable to recover it. 00:25:14.667 [2024-07-15 23:51:49.637249] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.667 [2024-07-15 23:51:49.637275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420 00:25:14.667 qpair failed and we were unable to recover it. 00:25:14.667 [2024-07-15 23:51:49.637368] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.667 [2024-07-15 23:51:49.637396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.667 qpair failed and we were unable to recover it. 00:25:14.667 [2024-07-15 23:51:49.637494] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.667 [2024-07-15 23:51:49.637521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.667 qpair failed and we were unable to recover it. 00:25:14.667 [2024-07-15 23:51:49.637636] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.667 [2024-07-15 23:51:49.637677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.667 qpair failed and we were unable to recover it. 00:25:14.667 [2024-07-15 23:51:49.637808] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.667 [2024-07-15 23:51:49.637836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.667 qpair failed and we were unable to recover it. 00:25:14.667 [2024-07-15 23:51:49.637965] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.667 [2024-07-15 23:51:49.637994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420 00:25:14.667 qpair failed and we were unable to recover it. 
00:25:14.667 [2024-07-15 23:51:49.638099] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.667 [2024-07-15 23:51:49.638128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.667 qpair failed and we were unable to recover it.
00:25:14.667 [2024-07-15 23:51:49.638234] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.667 [2024-07-15 23:51:49.638262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.667 qpair failed and we were unable to recover it.
00:25:14.667 [2024-07-15 23:51:49.638380] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.667 [2024-07-15 23:51:49.638407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.667 qpair failed and we were unable to recover it.
00:25:14.667 [2024-07-15 23:51:49.638515] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.667 [2024-07-15 23:51:49.638543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.667 qpair failed and we were unable to recover it.
00:25:14.667 [2024-07-15 23:51:49.638642] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.667 [2024-07-15 23:51:49.638670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.667 qpair failed and we were unable to recover it.
00:25:14.667 [2024-07-15 23:51:49.638796] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.667 [2024-07-15 23:51:49.638823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.667 qpair failed and we were unable to recover it.
00:25:14.667 [2024-07-15 23:51:49.638919] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.667 [2024-07-15 23:51:49.638946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.667 qpair failed and we were unable to recover it.
00:25:14.667 [2024-07-15 23:51:49.639089] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.667 [2024-07-15 23:51:49.639115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.667 qpair failed and we were unable to recover it.
00:25:14.667 [2024-07-15 23:51:49.639249] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.667 [2024-07-15 23:51:49.639276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.667 qpair failed and we were unable to recover it.
00:25:14.667 [2024-07-15 23:51:49.639402] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.667 [2024-07-15 23:51:49.639429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.667 qpair failed and we were unable to recover it.
00:25:14.667 [2024-07-15 23:51:49.639548] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.667 [2024-07-15 23:51:49.639576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.667 qpair failed and we were unable to recover it.
00:25:14.667 [2024-07-15 23:51:49.639714] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.667 [2024-07-15 23:51:49.639755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.667 qpair failed and we were unable to recover it.
00:25:14.667 [2024-07-15 23:51:49.639883] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.667 [2024-07-15 23:51:49.639913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.667 qpair failed and we were unable to recover it.
00:25:14.667 [2024-07-15 23:51:49.640033] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.667 [2024-07-15 23:51:49.640061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.667 qpair failed and we were unable to recover it.
00:25:14.667 [2024-07-15 23:51:49.640186] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.667 [2024-07-15 23:51:49.640213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.667 qpair failed and we were unable to recover it.
00:25:14.667 [2024-07-15 23:51:49.640338] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.667 [2024-07-15 23:51:49.640365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.667 qpair failed and we were unable to recover it.
00:25:14.667 [2024-07-15 23:51:49.640461] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.667 [2024-07-15 23:51:49.640488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.667 qpair failed and we were unable to recover it.
00:25:14.667 [2024-07-15 23:51:49.640588] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.667 [2024-07-15 23:51:49.640615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.667 qpair failed and we were unable to recover it.
00:25:14.667 [2024-07-15 23:51:49.640719] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.667 [2024-07-15 23:51:49.640749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.667 qpair failed and we were unable to recover it.
00:25:14.667 [2024-07-15 23:51:49.640844] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.667 [2024-07-15 23:51:49.640872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.667 qpair failed and we were unable to recover it.
00:25:14.667 [2024-07-15 23:51:49.641013] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.667 [2024-07-15 23:51:49.641053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.667 qpair failed and we were unable to recover it.
00:25:14.667 [2024-07-15 23:51:49.641157] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.667 [2024-07-15 23:51:49.641185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.667 qpair failed and we were unable to recover it.
00:25:14.667 [2024-07-15 23:51:49.641313] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.667 [2024-07-15 23:51:49.641341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.667 qpair failed and we were unable to recover it.
00:25:14.667 [2024-07-15 23:51:49.641442] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.667 [2024-07-15 23:51:49.641469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.667 qpair failed and we were unable to recover it.
00:25:14.667 [2024-07-15 23:51:49.641587] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.667 [2024-07-15 23:51:49.641613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.667 qpair failed and we were unable to recover it.
00:25:14.667 [2024-07-15 23:51:49.641703] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.667 [2024-07-15 23:51:49.641728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.667 qpair failed and we were unable to recover it.
00:25:14.667 [2024-07-15 23:51:49.641867] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.667 [2024-07-15 23:51:49.641908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.667 qpair failed and we were unable to recover it.
00:25:14.667 [2024-07-15 23:51:49.642033] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.667 [2024-07-15 23:51:49.642062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.667 qpair failed and we were unable to recover it.
00:25:14.667 [2024-07-15 23:51:49.642167] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.667 [2024-07-15 23:51:49.642197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.667 qpair failed and we were unable to recover it.
00:25:14.667 [2024-07-15 23:51:49.642326] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.667 [2024-07-15 23:51:49.642356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.667 qpair failed and we were unable to recover it.
00:25:14.667 [2024-07-15 23:51:49.642454] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.642481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.642578] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.642605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.642700] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.642727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.642847] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.642880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.642975] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.643013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.643114] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.643141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.643248] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.643275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.643378] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.643407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.643502] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.643531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.643632] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.643662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.643789] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.643817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.643959] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.643987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.644100] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.644127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.644225] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.644258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.644363] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.644390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.644485] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.644514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.644640] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.644668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.644771] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.644801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.644909] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.644937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.645053] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.645080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.645172] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.645199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.645336] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.645363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.645466] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.645493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.645620] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.645647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.645744] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.645773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.645872] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.645900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.646078] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.646105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.646200] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.646227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.646330] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.646357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.646450] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.646478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.646577] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.646605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.646705] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.646736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7200 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.646841] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.646871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.646973] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.647001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.647098] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.647125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.647222] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.647260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.647358] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.647386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.647486] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.647514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.647620] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.647648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.647747] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.647777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.647897] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.647924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.648042] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.648069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.648160] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.648187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.648291] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.648324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.648423] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.648451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.648547] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.648575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.648701] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.648729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.648838] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.648865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb84000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.648965] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.649002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.649096] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.649123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.649218] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.649253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.649399] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.649427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.649523] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.649550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.668 [2024-07-15 23:51:49.649652] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.668 [2024-07-15 23:51:49.649679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.668 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.649769] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.649796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.649887] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.649914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.650020] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.650047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.650150] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.650178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.650286] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.650313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.650429] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.650457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.650551] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.650578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.650688] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.650715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.650806] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.650833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.650922] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.650949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.651081] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.651121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.651228] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.651265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.651371] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.651399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.651494] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.651521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.651623] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.651650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.651744] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.651772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.651872] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.651900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.651998] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.652025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.652151] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.652177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.652265] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.652292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.652394] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.652422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.652516] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.652543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.652662] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.652691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.652785] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.652812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.652908] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.652936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.653051] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.653080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.653171] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.653198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.653300] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.653327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.653426] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.653452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.653547] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.653575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.653684] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.653711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.653797] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.653824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.653921] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.653948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.654065] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.654092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.654192] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.654219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.654333] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.654360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.654482] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.654509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.654603] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.654632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.654728] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.654755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.654844] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.654871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.654966] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.655004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.655120] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.655147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.655253] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.655280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.655410] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.655438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.655546] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.655573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.655675] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.655702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.655799] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.655826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.655929] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.655964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.656070] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.656096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.656196] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.656222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.656350] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.656378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.656476] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.656502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.656625] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.656654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.656757] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.656783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.656879] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.656906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.657039] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.657066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.657161] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.657192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.657322] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.657350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.657448] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.657474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.657577] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.657605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.657706] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.669 [2024-07-15 23:51:49.657733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.669 qpair failed and we were unable to recover it.
00:25:14.669 [2024-07-15 23:51:49.657857] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.670 [2024-07-15 23:51:49.657884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.670 qpair failed and we were unable to recover it.
00:25:14.670 [2024-07-15 23:51:49.657978] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.670 [2024-07-15 23:51:49.658015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.670 qpair failed and we were unable to recover it.
00:25:14.670 [2024-07-15 23:51:49.658110] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.670 [2024-07-15 23:51:49.658138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.670 qpair failed and we were unable to recover it.
00:25:14.670 [2024-07-15 23:51:49.658231] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.670 [2024-07-15 23:51:49.658268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.670 qpair failed and we were unable to recover it.
00:25:14.670 [2024-07-15 23:51:49.658413] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.670 [2024-07-15 23:51:49.658440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.670 qpair failed and we were unable to recover it.
00:25:14.670 [2024-07-15 23:51:49.658545] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.670 [2024-07-15 23:51:49.658572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.670 qpair failed and we were unable to recover it.
00:25:14.670 [2024-07-15 23:51:49.658693] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.670 [2024-07-15 23:51:49.658721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.670 qpair failed and we were unable to recover it.
00:25:14.670 [2024-07-15 23:51:49.658816] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.670 [2024-07-15 23:51:49.658843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.670 qpair failed and we were unable to recover it.
00:25:14.670 [2024-07-15 23:51:49.658929] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.670 [2024-07-15 23:51:49.658966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.670 qpair failed and we were unable to recover it.
00:25:14.670 [2024-07-15 23:51:49.659076] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.670 [2024-07-15 23:51:49.659105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.670 qpair failed and we were unable to recover it.
00:25:14.670 [2024-07-15 23:51:49.659213] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.670 [2024-07-15 23:51:49.659249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.670 qpair failed and we were unable to recover it.
00:25:14.670 [2024-07-15 23:51:49.659372] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.670 [2024-07-15 23:51:49.659399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.670 qpair failed and we were unable to recover it.
00:25:14.670 [2024-07-15 23:51:49.659495] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.670 [2024-07-15 23:51:49.659521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.670 qpair failed and we were unable to recover it.
00:25:14.670 [2024-07-15 23:51:49.659624] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.670 [2024-07-15 23:51:49.659651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.670 qpair failed and we were unable to recover it.
00:25:14.670 [2024-07-15 23:51:49.659749] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.670 [2024-07-15 23:51:49.659775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.670 qpair failed and we were unable to recover it.
00:25:14.670 [2024-07-15 23:51:49.659867] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.670 [2024-07-15 23:51:49.659895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.670 qpair failed and we were unable to recover it.
00:25:14.670 [2024-07-15 23:51:49.659992] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.670 [2024-07-15 23:51:49.660025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.670 qpair failed and we were unable to recover it.
00:25:14.670 [2024-07-15 23:51:49.660128] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.670 [2024-07-15 23:51:49.660154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.670 qpair failed and we were unable to recover it.
00:25:14.670 [2024-07-15 23:51:49.660281] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.670 [2024-07-15 23:51:49.660307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.670 qpair failed and we were unable to recover it.
00:25:14.670 [2024-07-15 23:51:49.660427] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.670 [2024-07-15 23:51:49.660452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.670 qpair failed and we were unable to recover it.
00:25:14.670 [2024-07-15 23:51:49.660546] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.670 [2024-07-15 23:51:49.660572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.670 qpair failed and we were unable to recover it.
00:25:14.670 [2024-07-15 23:51:49.660664] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.670 [2024-07-15 23:51:49.660691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.670 qpair failed and we were unable to recover it.
00:25:14.670 [2024-07-15 23:51:49.660792] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.670 [2024-07-15 23:51:49.660819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.670 qpair failed and we were unable to recover it.
00:25:14.670 [2024-07-15 23:51:49.660909] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.670 [2024-07-15 23:51:49.660935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420
00:25:14.670 qpair failed and we were unable to recover it.
00:25:14.670 [2024-07-15 23:51:49.661043] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.670 [2024-07-15 23:51:49.661071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420
00:25:14.670 qpair failed and we were unable to recover it.
00:25:14.670 [2024-07-15 23:51:49.661169] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.670 [2024-07-15 23:51:49.661197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.670 qpair failed and we were unable to recover it. 00:25:14.670 [2024-07-15 23:51:49.661299] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.670 [2024-07-15 23:51:49.661326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.670 qpair failed and we were unable to recover it. 00:25:14.670 [2024-07-15 23:51:49.661452] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.670 [2024-07-15 23:51:49.661479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.670 qpair failed and we were unable to recover it. 00:25:14.670 [2024-07-15 23:51:49.661570] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.670 [2024-07-15 23:51:49.661597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.670 qpair failed and we were unable to recover it. 00:25:14.670 [2024-07-15 23:51:49.661687] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.670 [2024-07-15 23:51:49.661713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.670 qpair failed and we were unable to recover it. 00:25:14.670 [2024-07-15 23:51:49.661805] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.670 [2024-07-15 23:51:49.661832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.670 qpair failed and we were unable to recover it. 00:25:14.670 [2024-07-15 23:51:49.661920] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.670 [2024-07-15 23:51:49.661947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.670 qpair failed and we were unable to recover it. 00:25:14.670 [2024-07-15 23:51:49.662064] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.670 [2024-07-15 23:51:49.662092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.670 qpair failed and we were unable to recover it. 00:25:14.670 [2024-07-15 23:51:49.662210] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.670 [2024-07-15 23:51:49.662237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.670 qpair failed and we were unable to recover it. 00:25:14.670 [2024-07-15 23:51:49.662335] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.670 [2024-07-15 23:51:49.662361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.670 qpair failed and we were unable to recover it. 
00:25:14.670 [2024-07-15 23:51:49.662456] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.670 [2024-07-15 23:51:49.662487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.670 qpair failed and we were unable to recover it. 00:25:14.670 [2024-07-15 23:51:49.662597] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.670 [2024-07-15 23:51:49.662624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.670 qpair failed and we were unable to recover it. 00:25:14.670 [2024-07-15 23:51:49.662713] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.670 [2024-07-15 23:51:49.662740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.670 qpair failed and we were unable to recover it. 00:25:14.670 [2024-07-15 23:51:49.662836] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.670 [2024-07-15 23:51:49.662862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.670 qpair failed and we were unable to recover it. 00:25:14.670 [2024-07-15 23:51:49.662977] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.670 [2024-07-15 23:51:49.663007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.670 qpair failed and we were unable to recover it. 00:25:14.670 [2024-07-15 23:51:49.663099] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.670 [2024-07-15 23:51:49.663125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.670 qpair failed and we were unable to recover it. 00:25:14.670 [2024-07-15 23:51:49.663217] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.670 [2024-07-15 23:51:49.663244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.670 qpair failed and we were unable to recover it. 00:25:14.670 [2024-07-15 23:51:49.663341] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.670 [2024-07-15 23:51:49.663367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.670 qpair failed and we were unable to recover it. 00:25:14.670 [2024-07-15 23:51:49.663489] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.670 [2024-07-15 23:51:49.663516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.670 qpair failed and we were unable to recover it. 00:25:14.670 [2024-07-15 23:51:49.663612] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.670 [2024-07-15 23:51:49.663638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.670 qpair failed and we were unable to recover it. 
00:25:14.670 [2024-07-15 23:51:49.663733] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.670 [2024-07-15 23:51:49.663760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.670 qpair failed and we were unable to recover it. 00:25:14.670 [2024-07-15 23:51:49.663855] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.670 [2024-07-15 23:51:49.663882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.670 qpair failed and we were unable to recover it. 00:25:14.670 [2024-07-15 23:51:49.663980] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.670 [2024-07-15 23:51:49.664026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.670 qpair failed and we were unable to recover it. 00:25:14.670 [2024-07-15 23:51:49.664121] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.670 [2024-07-15 23:51:49.664147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.670 qpair failed and we were unable to recover it. 00:25:14.670 [2024-07-15 23:51:49.664244] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.670 [2024-07-15 23:51:49.664270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.670 qpair failed and we were unable to recover it. 00:25:14.670 [2024-07-15 23:51:49.664366] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.670 [2024-07-15 23:51:49.664393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.670 qpair failed and we were unable to recover it. 00:25:14.670 [2024-07-15 23:51:49.664525] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.670 [2024-07-15 23:51:49.664554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.670 qpair failed and we were unable to recover it. 00:25:14.670 [2024-07-15 23:51:49.664660] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.670 [2024-07-15 23:51:49.664686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.670 qpair failed and we were unable to recover it. 00:25:14.670 [2024-07-15 23:51:49.664782] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.670 [2024-07-15 23:51:49.664808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.670 qpair failed and we were unable to recover it. 00:25:14.670 [2024-07-15 23:51:49.664904] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.670 [2024-07-15 23:51:49.664930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.670 qpair failed and we were unable to recover it. 
00:25:14.670 [2024-07-15 23:51:49.665037] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.670 [2024-07-15 23:51:49.665064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.670 qpair failed and we were unable to recover it. 00:25:14.670 [2024-07-15 23:51:49.665169] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.670 [2024-07-15 23:51:49.665196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.670 qpair failed and we were unable to recover it. 00:25:14.670 [2024-07-15 23:51:49.665284] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.670 [2024-07-15 23:51:49.665310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.670 qpair failed and we were unable to recover it. 00:25:14.670 [2024-07-15 23:51:49.665397] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.670 [2024-07-15 23:51:49.665423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.670 qpair failed and we were unable to recover it. 00:25:14.670 [2024-07-15 23:51:49.665522] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.670 [2024-07-15 23:51:49.665550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.670 qpair failed and we were unable to recover it. 00:25:14.670 [2024-07-15 23:51:49.665647] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.670 [2024-07-15 23:51:49.665675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.670 qpair failed and we were unable to recover it. 00:25:14.670 [2024-07-15 23:51:49.665766] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.670 [2024-07-15 23:51:49.665793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.665906] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.665932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.666056] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.666085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.666183] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.666209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 
00:25:14.671 [2024-07-15 23:51:49.666318] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.666345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.666447] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.666474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.666567] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.666595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.666713] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.666739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.666830] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.666856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.666972] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.667000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.667103] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.667130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.667219] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.667245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.667378] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.667404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.667526] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.667552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 
00:25:14.671 [2024-07-15 23:51:49.667659] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.667693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.667798] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.667825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.667919] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.667946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.668046] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.668072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.668166] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.668193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.668293] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.668320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.668415] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.668444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.668567] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.668594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.668719] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.668745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.668839] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.668867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 
00:25:14.671 [2024-07-15 23:51:49.668953] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.668999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.669116] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.669143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.669235] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.669262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.669381] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.669407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.669557] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.669583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.669680] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.669708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.669804] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.669832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.669940] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.669973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.670091] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.670118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.670219] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.670245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 
00:25:14.671 [2024-07-15 23:51:49.670349] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.670375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.670475] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.670504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.670603] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.670630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.670755] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.670781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.670873] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.670900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.670996] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.671024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.671118] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.671145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.671246] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.671274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.671381] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.671408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.671501] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.671528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 
00:25:14.671 [2024-07-15 23:51:49.671629] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.671658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.671758] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.671784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.671882] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.671909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.672006] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.672033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.672133] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.672160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.672260] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.672287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.672382] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.672409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.672512] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.671 [2024-07-15 23:51:49.672538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.671 qpair failed and we were unable to recover it. 00:25:14.671 [2024-07-15 23:51:49.672644] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.672670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.672773] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.672801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 
00:25:14.672 [2024-07-15 23:51:49.672919] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.672951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.673055] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.673082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.673173] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.673199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.673323] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.673350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.673451] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.673479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.673572] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.673599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.673691] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.673717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.673818] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.673846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.673966] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.673994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.674099] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.674125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 
00:25:14.672 [2024-07-15 23:51:49.674224] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.674250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.674394] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.674421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.674513] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.674540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.674628] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.674654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.674765] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.674793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.674885] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.674911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.675018] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.675046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.675151] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.675178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.675300] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.675326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.675424] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.675451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 
00:25:14.672 [2024-07-15 23:51:49.675549] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.675576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.675697] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.675722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.675819] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.675846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.675949] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.675983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.676081] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.676108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.676202] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.676229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.676322] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.676348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.676450] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.676477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.676578] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.676606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.676722] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.676748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 
00:25:14.672 [2024-07-15 23:51:49.676865] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.676892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.676991] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.677019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.677113] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.677141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.677265] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.677292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.677398] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.677425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.677553] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.677580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.677673] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.677701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.677817] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.677844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.677940] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.677974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.678074] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.678100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 
00:25:14.672 [2024-07-15 23:51:49.678192] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.678224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.678326] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.678352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.678442] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.678472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.678559] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.678585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.678680] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.678707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.678806] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.678832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.678928] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.678960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.679060] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.679087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.679210] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.679237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.679341] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.679369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 
00:25:14.672 [2024-07-15 23:51:49.679467] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.679497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.679599] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.679626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.679721] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.679747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.679839] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.679866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.679972] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.679999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.680093] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.680121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.680236] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.680263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.680361] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.680388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.680478] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.672 [2024-07-15 23:51:49.680505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.672 qpair failed and we were unable to recover it. 00:25:14.672 [2024-07-15 23:51:49.680627] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.673 [2024-07-15 23:51:49.680654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.673 qpair failed and we were unable to recover it. 
00:25:14.673 [2024-07-15 23:51:49.680743] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.673 [2024-07-15 23:51:49.680770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb94000b90 with addr=10.0.0.2, port=4420 00:25:14.673 qpair failed and we were unable to recover it.
00:25:14.673 [2024-07-15 23:51:49.681024] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.673 [2024-07-15 23:51:49.681053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.673 qpair failed and we were unable to recover it.
00:25:14.673 [... identical "connect() failed, errno = 111" / "qpair failed and we were unable to recover it" messages repeat for tqpair=0x7feb94000b90 and tqpair=0x7feb8c000b90 through 23:51:49.689945; duplicate lines elided ...]
00:25:14.674 [2024-07-15 23:51:49.690061] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.674 [2024-07-15 23:51:49.690087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.674 qpair failed and we were unable to recover it. 00:25:14.674 [2024-07-15 23:51:49.690181] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.674 [2024-07-15 23:51:49.690210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.674 qpair failed and we were unable to recover it. 00:25:14.674 [2024-07-15 23:51:49.690307] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.674 [2024-07-15 23:51:49.690334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.674 qpair failed and we were unable to recover it. 00:25:14.674 [2024-07-15 23:51:49.690425] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.674 [2024-07-15 23:51:49.690452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.674 qpair failed and we were unable to recover it. 00:25:14.674 [2024-07-15 23:51:49.690544] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.674 [2024-07-15 23:51:49.690571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.674 qpair failed and we were unable to recover it. 00:25:14.674 [2024-07-15 23:51:49.690698] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.674 [2024-07-15 23:51:49.690725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.674 qpair failed and we were unable to recover it. 00:25:14.674 [2024-07-15 23:51:49.690821] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.674 [2024-07-15 23:51:49.690848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feb8c000b90 with addr=10.0.0.2, port=4420 00:25:14.674 qpair failed and we were unable to recover it. 00:25:14.674 A controller has encountered a failure and is being reset. 00:25:14.674 [2024-07-15 23:51:49.691007] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.674 [2024-07-15 23:51:49.691055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b50e0 with addr=10.0.0.2, port=4420 00:25:14.674 [2024-07-15 23:51:49.691076] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b50e0 is same with the state(5) to be set 00:25:14.674 [2024-07-15 23:51:49.691103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7b50e0 (9): Bad file descriptor 00:25:14.674 [2024-07-15 23:51:49.691121] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.674 [2024-07-15 23:51:49.691136] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.674 [2024-07-15 23:51:49.691151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
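For context: errno 111 on Linux is ECONNREFUSED, meaning nothing was accepting TCP connections on 10.0.0.2:4420 while the target side was torn down, which is exactly the condition this disconnect test provokes. A quick way to confirm the errno mapping on the test host, assuming python3 is installed there:

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # expected output: ECONNREFUSED - Connection refused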
00:25:14.674 Unable to reset the controller. 00:25:15.604 23:51:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:15.604 23:51:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:25:15.604 23:51:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:15.604 23:51:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:15.604 23:51:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:15.604 23:51:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:15.604 23:51:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:15.604 23:51:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.604 23:51:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:15.604 Malloc0 00:25:15.604 23:51:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.604 23:51:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:15.604 23:51:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.604 23:51:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:15.604 [2024-07-15 23:51:50.432048] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:15.604 23:51:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.604 23:51:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:15.604 23:51:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.604 23:51:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:15.604 23:51:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.604 23:51:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:15.604 23:51:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.604 23:51:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:15.604 23:51:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.604 23:51:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:15.604 23:51:50 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.604 23:51:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:15.604 [2024-07-15 23:51:50.460288] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:15.604 23:51:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.604 23:51:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:15.604 23:51:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.604 23:51:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:15.604 23:51:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.604 23:51:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3888519 00:25:15.862 Controller properly reset. 00:25:21.118 Initializing NVMe Controllers 00:25:21.118 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:21.118 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:21.118 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:25:21.118 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:25:21.118 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:25:21.118 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:25:21.118 Initialization complete. Launching workers. 
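The attach/associate messages above come from SPDK's userspace NVMe initiator reconnecting once the listener is back. The test drives everything through SPDK itself; a roughly equivalent sanity check from a Linux kernel initiator, assuming nvme-cli and the nvme-tcp module are available on that host, would be:

    sudo modprobe nvme-tcp
    sudo nvme discover -t tcp -a 10.0.0.2 -s 4420
    sudo nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1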
00:25:21.118 Starting thread on core 1 00:25:21.118 Starting thread on core 2 00:25:21.118 Starting thread on core 3 00:25:21.118 Starting thread on core 0 00:25:21.118 23:51:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:25:21.118 00:25:21.118 real 0m10.760s 00:25:21.118 user 0m34.176s 00:25:21.118 sys 0m7.252s 00:25:21.118 23:51:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:21.118 23:51:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:21.118 ************************************ 00:25:21.118 END TEST nvmf_target_disconnect_tc2 00:25:21.118 ************************************ 00:25:21.118 23:51:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:25:21.118 23:51:55 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:25:21.118 23:51:55 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:25:21.118 23:51:55 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:25:21.118 23:51:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:21.118 23:51:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:25:21.118 23:51:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:21.118 23:51:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:25:21.118 23:51:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:21.118 23:51:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:21.118 rmmod nvme_tcp 00:25:21.118 rmmod nvme_fabrics 00:25:21.118 rmmod nvme_keyring 00:25:21.118 23:51:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:21.118 23:51:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:25:21.118 23:51:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:25:21.118 23:51:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3889043 ']' 00:25:21.118 23:51:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3889043 00:25:21.118 23:51:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 3889043 ']' 00:25:21.118 23:51:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 3889043 00:25:21.118 23:51:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:25:21.118 23:51:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:21.118 23:51:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3889043 00:25:21.118 23:51:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:25:21.118 23:51:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:25:21.118 23:51:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3889043' 00:25:21.118 killing process with pid 3889043 00:25:21.118 23:51:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 3889043 00:25:21.118 23:51:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 3889043 00:25:21.118 
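For reference, the target-side configuration exercised by tc2 above can be reproduced by hand; a minimal sketch, assuming a standalone nvmf_tgt is already running and listening on the default /var/tmp/spdk.sock (the test's rpc_cmd is a thin wrapper around the same RPCs issued here via scripts/rpc.py):

    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MB malloc bdev, 512-byte blocks
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420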
23:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:21.118 23:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:21.118 23:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:21.118 23:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:21.118 23:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:21.118 23:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.118 23:51:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:21.118 23:51:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:23.043 23:51:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:23.043 00:25:23.043 real 0m15.606s 00:25:23.043 user 0m59.574s 00:25:23.043 sys 0m9.783s 00:25:23.043 23:51:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:23.043 23:51:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:23.043 ************************************ 00:25:23.043 END TEST nvmf_target_disconnect 00:25:23.043 ************************************ 00:25:23.043 23:51:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:23.043 23:51:58 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:25:23.043 23:51:58 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:23.043 23:51:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:23.043 23:51:58 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:25:23.043 00:25:23.043 real 19m10.307s 00:25:23.043 user 45m23.168s 00:25:23.043 sys 4m56.706s 00:25:23.043 23:51:58 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:23.043 23:51:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:23.043 ************************************ 00:25:23.043 END TEST nvmf_tcp 00:25:23.043 ************************************ 00:25:23.314 23:51:58 -- common/autotest_common.sh@1142 -- # return 0 00:25:23.314 23:51:58 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:25:23.314 23:51:58 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:23.314 23:51:58 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:23.314 23:51:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:23.314 23:51:58 -- common/autotest_common.sh@10 -- # set +x 00:25:23.314 ************************************ 00:25:23.314 START TEST spdkcli_nvmf_tcp 00:25:23.314 ************************************ 00:25:23.314 23:51:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:23.314 * Looking for test storage... 
00:25:23.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:25:23.314 23:51:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:25:23.314 23:51:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:25:23.314 23:51:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:25:23.314 23:51:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:23.314 23:51:58 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:25:23.314 23:51:58 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:23.314 23:51:58 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:23.314 23:51:58 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:23.314 23:51:58 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3890126 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3890126 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 3890126 ']' 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:23.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:23.315 23:51:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:23.315 [2024-07-15 23:51:58.304762] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:25:23.315 [2024-07-15 23:51:58.304851] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3890126 ] 00:25:23.315 EAL: No free 2048 kB hugepages reported on node 1 00:25:23.315 [2024-07-15 23:51:58.364165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:23.572 [2024-07-15 23:51:58.478173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:23.572 [2024-07-15 23:51:58.478176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.572 23:51:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:23.572 23:51:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:25:23.572 23:51:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:25:23.572 23:51:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:23.572 23:51:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:23.572 23:51:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:25:23.572 23:51:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:25:23.572 23:51:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:25:23.572 23:51:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:23.572 23:51:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:23.572 23:51:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:23.572 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:23.572 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:25:23.572 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:25:23.572 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:25:23.572 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:25:23.572 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:25:23.572 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:23.572 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:25:23.572 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:25:23.572 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:23.572 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:23.572 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:23.572 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:23.572 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:23.572 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:23.572 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:23.572 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:23.573 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:23.573 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:23.573 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:23.573 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:23.573 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:23.573 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:25:23.573 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:23.573 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:23.573 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:23.573 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:23.573 ' 00:25:26.094 [2024-07-15 23:52:01.107182] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:27.465 [2024-07-15 23:52:02.323316] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:29.991 [2024-07-15 23:52:04.586248] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:31.888 [2024-07-15 23:52:06.536403] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:25:33.262 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:33.262 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:33.262 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:33.262 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:33.262 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:33.262 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:33.262 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:33.262 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:33.262 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:33.262 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:33.262 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:33.262 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:33.262 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:33.262 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:33.262 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:33.262 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:33.262 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:33.262 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:33.262 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:33.262 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:33.262 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:33.262 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:33.262 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:33.262 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:33.262 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:33.262 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:33.262 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:25:33.262 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:33.262 23:52:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:33.262 23:52:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:33.262 23:52:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:33.262 23:52:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:33.262 23:52:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:33.262 23:52:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:33.262 23:52:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:25:33.262 23:52:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:25:33.520 23:52:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:33.520 23:52:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:33.520 23:52:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:33.520 23:52:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:33.520 23:52:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:33.520 23:52:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:33.520 23:52:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:33.520 23:52:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:33.520 23:52:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:33.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:33.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:33.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:33.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:33.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:33.520 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:33.520 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:33.520 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:33.520 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:33.520 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:33.520 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:33.520 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:33.520 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:33.520 ' 00:25:38.784 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:38.784 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:38.784 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:38.784 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:38.784 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:38.784 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:38.784 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:38.784 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:38.784 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:38.784 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:38.784 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:25:38.784 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:38.784 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:38.784 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:38.784 23:52:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:38.784 23:52:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:38.784 23:52:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:38.784 23:52:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3890126 00:25:38.784 23:52:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 3890126 ']' 00:25:38.784 23:52:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 3890126 00:25:38.784 23:52:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:25:38.784 23:52:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:38.784 23:52:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3890126 00:25:38.784 23:52:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:38.784 23:52:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:38.784 23:52:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3890126' 00:25:38.784 killing process with pid 3890126 00:25:38.784 23:52:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 3890126 00:25:38.784 23:52:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 3890126 00:25:39.043 23:52:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:39.043 23:52:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:39.043 23:52:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3890126 ']' 00:25:39.043 23:52:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3890126 00:25:39.043 23:52:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 3890126 ']' 00:25:39.043 23:52:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 3890126 00:25:39.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3890126) - No such process 00:25:39.043 23:52:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 3890126 is not found' 00:25:39.043 Process with pid 3890126 is not found 00:25:39.043 23:52:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:39.043 23:52:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:39.043 23:52:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:39.043 00:25:39.043 real 0m15.960s 00:25:39.043 user 0m33.619s 00:25:39.043 sys 0m0.803s 00:25:39.043 23:52:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:39.043 23:52:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:39.043 ************************************ 00:25:39.043 END TEST spdkcli_nvmf_tcp 00:25:39.043 ************************************ 00:25:39.302 23:52:14 -- common/autotest_common.sh@1142 -- # return 0 00:25:39.302 23:52:14 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:39.302 23:52:14 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:39.302 23:52:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:39.302 23:52:14 -- common/autotest_common.sh@10 -- # set +x 00:25:39.302 ************************************ 00:25:39.302 START TEST nvmf_identify_passthru 00:25:39.302 ************************************ 00:25:39.302 23:52:14 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:39.302 * Looking for test storage... 00:25:39.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:39.302 23:52:14 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:39.302 23:52:14 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:25:39.302 23:52:14 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:39.302 23:52:14 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:39.302 23:52:14 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:39.302 23:52:14 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:39.302 23:52:14 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:39.302 23:52:14 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:39.302 23:52:14 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:39.302 23:52:14 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:39.302 23:52:14 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:39.302 23:52:14 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:39.302 23:52:14 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:39.302 23:52:14 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:39.302 23:52:14 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:39.302 23:52:14 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:39.302 23:52:14 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:39.302 23:52:14 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:39.302 23:52:14 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:39.302 23:52:14 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:39.302 23:52:14 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:39.302 23:52:14 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:39.302 23:52:14 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.302 23:52:14 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.302 23:52:14 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.302 23:52:14 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:25:39.302 23:52:14 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.302 23:52:14 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:25:39.302 23:52:14 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:39.302 23:52:14 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:39.302 23:52:14 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:39.302 23:52:14 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:39.302 23:52:14 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:39.302 23:52:14 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:39.302 23:52:14 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:39.302 23:52:14 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:39.302 23:52:14 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:39.302 23:52:14 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:39.302 23:52:14 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:39.302 23:52:14 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:39.302 23:52:14 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.302 23:52:14 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.302 23:52:14 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.302 23:52:14 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:25:39.302 23:52:14 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.302 23:52:14 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:25:39.302 23:52:14 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:39.302 23:52:14 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:39.302 23:52:14 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:39.302 23:52:14 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:39.302 23:52:14 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:39.302 23:52:14 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.302 23:52:14 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:39.302 23:52:14 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:39.302 23:52:14 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:39.302 23:52:14 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:39.302 23:52:14 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:25:39.302 23:52:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:41.201 23:52:16 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:41.201 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:41.201 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:41.201 Found net devices under 0000:09:00.0: cvl_0_0 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:41.201 Found net devices under 0000:09:00.1: cvl_0_1 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
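The nvmf_tcp_init trace that follows builds the two-port loopback topology used for the rest of the TCP runs: one cvl port is moved into a private network namespace to act as the target, while the other stays in the root namespace as the initiator. A condensed sketch of that plumbing, using only the interface names and addresses visible in this run (cvl_0_0/cvl_0_1, 10.0.0.1/10.0.0.2, port 4420); the names are PCI-derived and will differ on other hosts:

  # Standalone recreation of the nvmf_tcp_init steps traced below (sketch).
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                      # host -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # namespace -> host

The bidirectional pings at the end are the same reachability check the trace performs before declaring the TCP fixture ready.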
00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:41.201 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:41.459 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:41.459 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:41.459 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:41.460 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:41.460 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:41.460 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:41.460 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:41.460 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:41.460 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:25:41.460 00:25:41.460 --- 10.0.0.2 ping statistics --- 00:25:41.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.460 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:25:41.460 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:41.460 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:41.460 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:25:41.460 00:25:41.460 --- 10.0.0.1 ping statistics --- 00:25:41.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.460 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:25:41.460 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:41.460 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:25:41.460 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:41.460 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:41.460 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:41.460 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:41.460 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:41.460 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:41.460 23:52:16 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:41.460 23:52:16 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:25:41.460 23:52:16 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:41.460 23:52:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:41.460 23:52:16 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:25:41.460 23:52:16 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:25:41.460 23:52:16 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:25:41.460 23:52:16 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:25:41.460 23:52:16 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:25:41.460 23:52:16 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:25:41.460 23:52:16 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:25:41.460 23:52:16 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:41.460 23:52:16 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:41.460 23:52:16 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:25:41.460 23:52:16 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:25:41.460 23:52:16 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:0b:00.0 00:25:41.460 23:52:16 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:0b:00.0 00:25:41.460 23:52:16 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:0b:00.0 00:25:41.460 23:52:16 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:0b:00.0 ']' 00:25:41.460 23:52:16 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:25:41.460 23:52:16 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:25:41.460 23:52:16 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:25:41.460 EAL: No free 2048 kB hugepages reported on node 1 00:25:45.644 
23:52:20 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F4Q1P0FGN 00:25:45.644 23:52:20 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:25:45.644 23:52:20 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:25:45.644 23:52:20 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:45.644 EAL: No free 2048 kB hugepages reported on node 1 00:25:49.828 23:52:24 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:25:49.828 23:52:24 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:49.828 23:52:24 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:49.828 23:52:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:49.828 23:52:24 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:25:49.828 23:52:24 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:49.828 23:52:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:49.828 23:52:24 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3894749 00:25:49.828 23:52:24 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:49.828 23:52:24 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:49.828 23:52:24 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3894749 00:25:49.828 23:52:24 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 3894749 ']' 00:25:49.828 23:52:24 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:49.828 23:52:24 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:49.828 23:52:24 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:49.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:49.828 23:52:24 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:49.828 23:52:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:49.828 [2024-07-15 23:52:24.805898] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:25:49.828 [2024-07-15 23:52:24.806009] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:49.828 EAL: No free 2048 kB hugepages reported on node 1 00:25:49.828 [2024-07-15 23:52:24.870022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:50.086 [2024-07-15 23:52:24.978634] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:50.086 [2024-07-15 23:52:24.978684] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:50.086 [2024-07-15 23:52:24.978698] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:50.086 [2024-07-15 23:52:24.978723] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:50.086 [2024-07-15 23:52:24.978733] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:50.086 [2024-07-15 23:52:24.978815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:50.086 [2024-07-15 23:52:24.978881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:50.086 [2024-07-15 23:52:24.978949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.086 [2024-07-15 23:52:24.978946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:50.086 23:52:25 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:50.086 23:52:25 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:25:50.086 23:52:25 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:50.086 23:52:25 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.086 23:52:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:50.086 INFO: Log level set to 20 00:25:50.086 INFO: Requests: 00:25:50.086 { 00:25:50.086 "jsonrpc": "2.0", 00:25:50.086 "method": "nvmf_set_config", 00:25:50.086 "id": 1, 00:25:50.086 "params": { 00:25:50.086 "admin_cmd_passthru": { 00:25:50.086 "identify_ctrlr": true 00:25:50.086 } 00:25:50.086 } 00:25:50.086 } 00:25:50.086 00:25:50.086 INFO: response: 00:25:50.086 { 00:25:50.086 "jsonrpc": "2.0", 00:25:50.086 "id": 1, 00:25:50.086 "result": true 00:25:50.086 } 00:25:50.086 00:25:50.086 23:52:25 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.086 23:52:25 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:50.086 23:52:25 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.086 23:52:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:50.086 INFO: Setting log level to 20 00:25:50.086 INFO: Setting log level to 20 00:25:50.086 INFO: Log level set to 20 00:25:50.086 INFO: Log level set to 20 00:25:50.086 INFO: Requests: 00:25:50.086 { 00:25:50.086 "jsonrpc": "2.0", 00:25:50.086 "method": "framework_start_init", 00:25:50.086 "id": 1 00:25:50.086 } 00:25:50.086 00:25:50.086 INFO: Requests: 00:25:50.086 { 00:25:50.086 "jsonrpc": "2.0", 00:25:50.086 "method": "framework_start_init", 00:25:50.086 "id": 1 00:25:50.086 } 00:25:50.086 00:25:50.086 [2024-07-15 23:52:25.122113] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:50.086 INFO: response: 00:25:50.086 { 00:25:50.086 "jsonrpc": "2.0", 00:25:50.086 "id": 1, 00:25:50.086 "result": true 00:25:50.086 } 00:25:50.086 00:25:50.086 INFO: response: 00:25:50.086 { 00:25:50.086 "jsonrpc": "2.0", 00:25:50.086 "id": 1, 00:25:50.086 "result": true 00:25:50.087 } 00:25:50.087 00:25:50.087 23:52:25 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.087 23:52:25 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:50.087 23:52:25 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.087 23:52:25 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:25:50.087 INFO: Setting log level to 40 00:25:50.087 INFO: Setting log level to 40 00:25:50.087 INFO: Setting log level to 40 00:25:50.087 [2024-07-15 23:52:25.132128] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:50.087 23:52:25 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.087 23:52:25 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:50.087 23:52:25 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:50.087 23:52:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:50.087 23:52:25 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0 00:25:50.087 23:52:25 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.087 23:52:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:53.392 Nvme0n1 00:25:53.392 23:52:27 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.392 23:52:27 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:53.392 23:52:27 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.392 23:52:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:53.392 23:52:28 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.392 23:52:28 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:53.392 23:52:28 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.392 23:52:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:53.392 23:52:28 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.392 23:52:28 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:53.392 23:52:28 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.393 23:52:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:53.393 [2024-07-15 23:52:28.021217] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:53.393 23:52:28 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.393 23:52:28 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:53.393 23:52:28 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.393 23:52:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:53.393 [ 00:25:53.393 { 00:25:53.393 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:53.393 "subtype": "Discovery", 00:25:53.393 "listen_addresses": [], 00:25:53.393 "allow_any_host": true, 00:25:53.393 "hosts": [] 00:25:53.393 }, 00:25:53.393 { 00:25:53.393 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:53.393 "subtype": "NVMe", 00:25:53.393 "listen_addresses": [ 00:25:53.393 { 00:25:53.393 "trtype": "TCP", 00:25:53.393 "adrfam": "IPv4", 00:25:53.393 "traddr": "10.0.0.2", 00:25:53.393 "trsvcid": "4420" 00:25:53.393 } 00:25:53.393 ], 00:25:53.393 "allow_any_host": true, 00:25:53.393 "hosts": [], 00:25:53.393 "serial_number": 
"SPDK00000000000001", 00:25:53.393 "model_number": "SPDK bdev Controller", 00:25:53.393 "max_namespaces": 1, 00:25:53.393 "min_cntlid": 1, 00:25:53.393 "max_cntlid": 65519, 00:25:53.393 "namespaces": [ 00:25:53.393 { 00:25:53.393 "nsid": 1, 00:25:53.393 "bdev_name": "Nvme0n1", 00:25:53.393 "name": "Nvme0n1", 00:25:53.393 "nguid": "E26DE4EF2E264E30B3E23BAEC9111010", 00:25:53.393 "uuid": "e26de4ef-2e26-4e30-b3e2-3baec9111010" 00:25:53.393 } 00:25:53.393 ] 00:25:53.393 } 00:25:53.393 ] 00:25:53.393 23:52:28 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.393 23:52:28 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:53.393 23:52:28 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:53.393 23:52:28 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:53.393 EAL: No free 2048 kB hugepages reported on node 1 00:25:53.393 23:52:28 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F4Q1P0FGN 00:25:53.393 23:52:28 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:53.393 23:52:28 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:53.393 23:52:28 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:53.393 EAL: No free 2048 kB hugepages reported on node 1 00:25:53.393 23:52:28 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:25:53.393 23:52:28 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F4Q1P0FGN '!=' BTLJ72430F4Q1P0FGN ']' 00:25:53.393 23:52:28 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:25:53.393 23:52:28 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:53.393 23:52:28 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.393 23:52:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:53.393 23:52:28 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.393 23:52:28 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:53.393 23:52:28 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:53.393 23:52:28 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:53.393 23:52:28 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:25:53.393 23:52:28 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:53.393 23:52:28 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:25:53.393 23:52:28 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:53.393 23:52:28 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:53.393 rmmod nvme_tcp 00:25:53.393 rmmod nvme_fabrics 00:25:53.393 rmmod nvme_keyring 00:25:53.393 23:52:28 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:53.393 23:52:28 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:25:53.393 23:52:28 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:25:53.393 23:52:28 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 3894749 ']' 00:25:53.393 23:52:28 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 3894749 00:25:53.393 23:52:28 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 3894749 ']' 00:25:53.393 23:52:28 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 3894749 00:25:53.393 23:52:28 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:25:53.393 23:52:28 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:53.393 23:52:28 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3894749 00:25:53.393 23:52:28 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:53.393 23:52:28 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:53.393 23:52:28 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3894749' 00:25:53.393 killing process with pid 3894749 00:25:53.393 23:52:28 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 3894749 00:25:53.393 23:52:28 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 3894749 00:25:54.841 23:52:29 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:54.841 23:52:29 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:54.841 23:52:29 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:54.841 23:52:29 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:54.841 23:52:29 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:54.841 23:52:29 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:54.841 23:52:29 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:54.841 23:52:29 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.375 23:52:31 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:57.375 00:25:57.375 real 0m17.780s 00:25:57.375 user 0m26.105s 00:25:57.375 sys 0m2.287s 00:25:57.375 23:52:31 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:57.375 23:52:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:57.375 ************************************ 00:25:57.375 END TEST nvmf_identify_passthru 00:25:57.375 ************************************ 00:25:57.375 23:52:32 -- common/autotest_common.sh@1142 -- # return 0 00:25:57.376 23:52:32 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:25:57.376 23:52:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:57.376 23:52:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:57.376 23:52:32 -- common/autotest_common.sh@10 -- # set +x 00:25:57.376 ************************************ 00:25:57.376 START TEST nvmf_dif 00:25:57.376 ************************************ 00:25:57.376 23:52:32 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:25:57.376 * Looking for test storage... 
00:25:57.376 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:57.376 23:52:32 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:57.376 23:52:32 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:25:57.376 23:52:32 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:57.376 23:52:32 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:57.376 23:52:32 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:57.376 23:52:32 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:57.376 23:52:32 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:57.376 23:52:32 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:57.376 23:52:32 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:57.376 23:52:32 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:57.376 23:52:32 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:57.376 23:52:32 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:57.376 23:52:32 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:57.376 23:52:32 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:57.376 23:52:32 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:57.376 23:52:32 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:57.376 23:52:32 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:57.376 23:52:32 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:57.376 23:52:32 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:57.376 23:52:32 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:57.376 23:52:32 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:57.376 23:52:32 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:57.376 23:52:32 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.376 23:52:32 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.376 23:52:32 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.376 23:52:32 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:25:57.376 23:52:32 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.376 23:52:32 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:25:57.376 23:52:32 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:57.376 23:52:32 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:57.376 23:52:32 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:57.376 23:52:32 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:57.376 23:52:32 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:57.376 23:52:32 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:57.376 23:52:32 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:57.376 23:52:32 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:57.376 23:52:32 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:25:57.376 23:52:32 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:57.376 23:52:32 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:57.376 23:52:32 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:25:57.376 23:52:32 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:25:57.376 23:52:32 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:57.376 23:52:32 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:57.376 23:52:32 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:57.376 23:52:32 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:57.376 23:52:32 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:57.376 23:52:32 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:57.376 23:52:32 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:57.376 23:52:32 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.376 23:52:32 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:57.376 23:52:32 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:57.376 23:52:32 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:25:57.376 23:52:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:59.285 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:59.285 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:59.285 Found net devices under 0000:09:00.0: cvl_0_0 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:59.285 Found net devices under 0000:09:00.1: cvl_0_1 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:59.285 23:52:34 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:59.286 23:52:34 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:59.286 23:52:34 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:59.286 23:52:34 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:59.286 23:52:34 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:59.286 23:52:34 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:59.286 23:52:34 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:59.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:59.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:25:59.286 00:25:59.286 --- 10.0.0.2 ping statistics --- 00:25:59.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.286 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:25:59.286 23:52:34 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:59.286 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:59.286 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:25:59.286 00:25:59.286 --- 10.0.0.1 ping statistics --- 00:25:59.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.286 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:25:59.286 23:52:34 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:59.286 23:52:34 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:25:59.286 23:52:34 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:25:59.286 23:52:34 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:00.220 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:26:00.220 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:26:00.478 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:26:00.478 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:26:00.478 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:26:00.478 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:26:00.478 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:26:00.478 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:26:00.478 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:26:00.478 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:00.478 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:26:00.478 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:26:00.478 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:26:00.478 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:26:00.478 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:26:00.478 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:26:00.478 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:26:00.478 23:52:35 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:00.478 23:52:35 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:00.478 23:52:35 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:00.478 23:52:35 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:00.478 23:52:35 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:00.478 23:52:35 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:00.478 23:52:35 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:26:00.478 23:52:35 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:26:00.478 23:52:35 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:00.478 23:52:35 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:00.478 23:52:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:00.478 23:52:35 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=3897889 00:26:00.478 23:52:35 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:00.478 23:52:35 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 3897889 00:26:00.478 23:52:35 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 3897889 ']' 00:26:00.478 23:52:35 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:00.478 23:52:35 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:00.478 23:52:35 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:00.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:00.478 23:52:35 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:00.478 23:52:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:00.737 [2024-07-15 23:52:35.624245] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:26:00.737 [2024-07-15 23:52:35.624316] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:00.737 EAL: No free 2048 kB hugepages reported on node 1 00:26:00.737 [2024-07-15 23:52:35.684014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:00.737 [2024-07-15 23:52:35.783917] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:00.737 [2024-07-15 23:52:35.783980] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:00.737 [2024-07-15 23:52:35.783994] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:00.737 [2024-07-15 23:52:35.784004] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:00.737 [2024-07-15 23:52:35.784014] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
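Note: the nvmf_tcp_init trace above is the entire network story of this run. The two e810 ports (found earlier under 0000:09:00.0/.1 as cvl_0_0 and cvl_0_1) are split across network namespaces so target and initiator traffic crosses the physical link on a single host. Reduced to bare commands, and assuming those two interface names, the sequence is roughly:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port toward the initiator
  ping -c 1 10.0.0.2                                             # root ns -> target (verified above)
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> initiator

nvmf_tgt itself is then launched through NVMF_TARGET_NS_CMD, i.e. prefixed with "ip netns exec cvl_0_0_ns_spdk", which is why the listener on 10.0.0.2:4420 is reachable from the root namespace only via cvl_0_1.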
00:26:00.737 [2024-07-15 23:52:35.784040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:00.995 23:52:35 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:00.995 23:52:35 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:26:00.995 23:52:35 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:00.995 23:52:35 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:00.995 23:52:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:00.995 23:52:35 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:00.995 23:52:35 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:26:00.995 23:52:35 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:26:00.995 23:52:35 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.995 23:52:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:00.995 [2024-07-15 23:52:35.920488] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:00.995 23:52:35 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.995 23:52:35 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:26:00.995 23:52:35 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:00.995 23:52:35 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:00.995 23:52:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:00.995 ************************************ 00:26:00.995 START TEST fio_dif_1_default 00:26:00.995 ************************************ 00:26:00.995 23:52:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:26:00.995 23:52:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:26:00.995 23:52:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:26:00.995 23:52:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:26:00.995 23:52:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:26:00.995 23:52:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:26:00.995 23:52:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:00.995 23:52:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.995 23:52:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:00.995 bdev_null0 00:26:00.995 23:52:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.995 23:52:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:00.996 23:52:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.996 23:52:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:00.996 23:52:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.996 23:52:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:00.996 23:52:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.996 23:52:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:00.996 23:52:35 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.996 23:52:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:00.996 23:52:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.996 23:52:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:00.996 [2024-07-15 23:52:35.976756] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:00.996 23:52:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.996 23:52:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:26:00.996 23:52:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:26:00.996 23:52:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:00.996 23:52:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:26:00.996 23:52:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:26:00.996 23:52:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:00.996 23:52:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:00.996 { 00:26:00.996 "params": { 00:26:00.996 "name": "Nvme$subsystem", 00:26:00.996 "trtype": "$TEST_TRANSPORT", 00:26:00.996 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:00.996 "adrfam": "ipv4", 00:26:00.996 "trsvcid": "$NVMF_PORT", 00:26:00.996 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:00.996 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:00.996 "hdgst": ${hdgst:-false}, 00:26:00.996 "ddgst": ${ddgst:-false} 00:26:00.996 }, 00:26:00.996 "method": "bdev_nvme_attach_controller" 00:26:00.996 } 00:26:00.996 EOF 00:26:00.996 )") 00:26:00.996 23:52:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:00.996 23:52:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:00.996 23:52:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:26:00.996 23:52:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:00.996 23:52:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:00.996 23:52:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:26:00.996 23:52:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:00.996 23:52:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:26:00.996 23:52:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:00.996 23:52:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:26:00.996 23:52:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:00.996 23:52:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:26:00.996 23:52:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:00.996 23:52:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:00.996 23:52:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:26:00.996 23:52:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:26:00.996 23:52:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:26:00.996 23:52:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:00.996 23:52:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:26:00.996 23:52:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:26:00.996 23:52:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:00.996 "params": { 00:26:00.996 "name": "Nvme0", 00:26:00.996 "trtype": "tcp", 00:26:00.996 "traddr": "10.0.0.2", 00:26:00.996 "adrfam": "ipv4", 00:26:00.996 "trsvcid": "4420", 00:26:00.996 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:00.996 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:00.996 "hdgst": false, 00:26:00.996 "ddgst": false 00:26:00.996 }, 00:26:00.996 "method": "bdev_nvme_attach_controller" 00:26:00.996 }' 00:26:00.996 23:52:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:00.996 23:52:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:00.996 23:52:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:00.996 23:52:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:00.996 23:52:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:00.996 23:52:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:00.996 23:52:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:00.996 23:52:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:00.996 23:52:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:00.996 23:52:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:01.254 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:01.254 fio-3.35 00:26:01.254 Starting 1 thread 00:26:01.254 EAL: No free 2048 kB hugepages reported on node 1 00:26:13.454 00:26:13.454 filename0: (groupid=0, jobs=1): err= 0: pid=3898120: Mon Jul 15 23:52:46 2024 00:26:13.454 read: IOPS=189, BW=758KiB/s (777kB/s)(7584KiB/10001msec) 00:26:13.454 slat (nsec): min=4077, max=29485, avg=9525.73, stdev=2399.88 00:26:13.454 clat (usec): min=572, max=46274, avg=21068.41, stdev=20312.80 00:26:13.454 lat (usec): min=581, max=46287, avg=21077.94, stdev=20312.87 00:26:13.454 clat percentiles (usec): 00:26:13.454 | 1.00th=[ 611], 5.00th=[ 627], 10.00th=[ 644], 20.00th=[ 668], 00:26:13.454 | 30.00th=[ 685], 40.00th=[ 709], 50.00th=[41157], 60.00th=[41157], 00:26:13.454 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:13.454 | 99.00th=[41157], 99.50th=[41157], 99.90th=[46400], 99.95th=[46400], 00:26:13.454 | 99.99th=[46400] 00:26:13.454 bw ( KiB/s): min= 672, max= 768, per=100.00%, avg=759.58, stdev=25.78, samples=19 00:26:13.454 iops : min= 168, max= 192, 
avg=189.89, stdev= 6.45, samples=19 00:26:13.454 lat (usec) : 750=47.36%, 1000=2.43% 00:26:13.454 lat (msec) : 50=50.21% 00:26:13.454 cpu : usr=89.82%, sys=9.92%, ctx=14, majf=0, minf=232 00:26:13.454 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:13.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.454 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.454 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.454 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:13.454 00:26:13.454 Run status group 0 (all jobs): 00:26:13.454 READ: bw=758KiB/s (777kB/s), 758KiB/s-758KiB/s (777kB/s-777kB/s), io=7584KiB (7766kB), run=10001-10001msec 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.454 00:26:13.454 real 0m11.125s 00:26:13.454 user 0m10.075s 00:26:13.454 sys 0m1.234s 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:13.454 ************************************ 00:26:13.454 END TEST fio_dif_1_default 00:26:13.454 ************************************ 00:26:13.454 23:52:47 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:13.454 23:52:47 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:26:13.454 23:52:47 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:13.454 23:52:47 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:13.454 23:52:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:13.454 ************************************ 00:26:13.454 START TEST fio_dif_1_multi_subsystems 00:26:13.454 ************************************ 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:26:13.454 23:52:47 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:13.454 bdev_null0 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:13.454 [2024-07-15 23:52:47.155856] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:13.454 bdev_null1 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:13.454 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:13.455 { 00:26:13.455 "params": { 00:26:13.455 "name": "Nvme$subsystem", 00:26:13.455 "trtype": "$TEST_TRANSPORT", 00:26:13.455 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:13.455 "adrfam": "ipv4", 00:26:13.455 "trsvcid": "$NVMF_PORT", 00:26:13.455 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:13.455 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:13.455 "hdgst": ${hdgst:-false}, 00:26:13.455 "ddgst": ${ddgst:-false} 00:26:13.455 }, 00:26:13.455 "method": "bdev_nvme_attach_controller" 00:26:13.455 } 00:26:13.455 EOF 00:26:13.455 )") 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # 
gen_fio_conf 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:13.455 { 00:26:13.455 "params": { 00:26:13.455 "name": "Nvme$subsystem", 00:26:13.455 "trtype": "$TEST_TRANSPORT", 00:26:13.455 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:13.455 "adrfam": "ipv4", 00:26:13.455 "trsvcid": "$NVMF_PORT", 00:26:13.455 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:13.455 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:13.455 "hdgst": ${hdgst:-false}, 00:26:13.455 "ddgst": ${ddgst:-false} 00:26:13.455 }, 00:26:13.455 "method": "bdev_nvme_attach_controller" 00:26:13.455 } 00:26:13.455 EOF 00:26:13.455 )") 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
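Note: gen_nvmf_target_json (the heredoc loop traced above) emits one bdev_nvme_attach_controller stanza per subsystem id it is given; jq validates the fragments and the comma-joined result is printed just below. fio reads this on /dev/fd/62, and the spdk_bdev ioengine replays it at startup, so each remote namespace surfaces as a bdev (controller Nvme1 exposes Nvme1n1, per SPDK's <name>n<nsid> convention) that the job file can reference. Expressed as interactive RPCs against a running SPDK app, the same two attaches would look roughly like this (a sketch; rpc_cmd in this harness wraps scripts/rpc.py, and the short flag spellings below are the conventional ones, not taken from this log):

  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1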
00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:13.455 "params": { 00:26:13.455 "name": "Nvme0", 00:26:13.455 "trtype": "tcp", 00:26:13.455 "traddr": "10.0.0.2", 00:26:13.455 "adrfam": "ipv4", 00:26:13.455 "trsvcid": "4420", 00:26:13.455 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:13.455 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:13.455 "hdgst": false, 00:26:13.455 "ddgst": false 00:26:13.455 }, 00:26:13.455 "method": "bdev_nvme_attach_controller" 00:26:13.455 },{ 00:26:13.455 "params": { 00:26:13.455 "name": "Nvme1", 00:26:13.455 "trtype": "tcp", 00:26:13.455 "traddr": "10.0.0.2", 00:26:13.455 "adrfam": "ipv4", 00:26:13.455 "trsvcid": "4420", 00:26:13.455 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:13.455 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:13.455 "hdgst": false, 00:26:13.455 "ddgst": false 00:26:13.455 }, 00:26:13.455 "method": "bdev_nvme_attach_controller" 00:26:13.455 }' 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:13.455 23:52:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:13.455 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:13.455 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:13.455 fio-3.35 00:26:13.455 Starting 2 threads 00:26:13.455 EAL: No free 2048 kB hugepages reported on node 1 00:26:23.422 00:26:23.422 filename0: (groupid=0, jobs=1): err= 0: pid=3899517: Mon Jul 15 23:52:58 2024 00:26:23.422 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10008msec) 00:26:23.422 slat (nsec): min=7200, max=28289, avg=9372.76, stdev=2980.31 00:26:23.422 clat (usec): min=40796, max=44839, avg=40986.91, stdev=252.34 00:26:23.422 lat (usec): min=40804, max=44865, avg=40996.28, stdev=252.54 00:26:23.422 clat percentiles (usec): 00:26:23.422 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:26:23.422 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:26:23.422 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:23.422 | 99.00th=[41157], 99.50th=[41157], 99.90th=[44827], 99.95th=[44827], 00:26:23.422 | 99.99th=[44827] 
00:26:23.422 bw ( KiB/s): min= 384, max= 416, per=49.73%, avg=388.80, stdev=11.72, samples=20 00:26:23.422 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:26:23.422 lat (msec) : 50=100.00% 00:26:23.422 cpu : usr=94.40%, sys=5.30%, ctx=15, majf=0, minf=69 00:26:23.422 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:23.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.422 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.422 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.422 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:23.422 filename1: (groupid=0, jobs=1): err= 0: pid=3899518: Mon Jul 15 23:52:58 2024 00:26:23.422 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10007msec) 00:26:23.422 slat (nsec): min=7307, max=56677, avg=9582.13, stdev=3698.28 00:26:23.422 clat (usec): min=40857, max=43892, avg=40982.51, stdev=188.93 00:26:23.422 lat (usec): min=40865, max=43918, avg=40992.09, stdev=189.12 00:26:23.422 clat percentiles (usec): 00:26:23.422 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:26:23.422 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:26:23.422 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:23.422 | 99.00th=[41157], 99.50th=[41157], 99.90th=[43779], 99.95th=[43779], 00:26:23.422 | 99.99th=[43779] 00:26:23.422 bw ( KiB/s): min= 384, max= 416, per=49.73%, avg=388.80, stdev=11.72, samples=20 00:26:23.422 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:26:23.422 lat (msec) : 50=100.00% 00:26:23.422 cpu : usr=94.42%, sys=5.28%, ctx=19, majf=0, minf=200 00:26:23.422 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:23.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.422 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.422 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.422 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:23.422 00:26:23.422 Run status group 0 (all jobs): 00:26:23.422 READ: bw=780KiB/s (799kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=7808KiB (7995kB), run=10007-10008msec 00:26:23.422 23:52:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:26:23.422 23:52:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:26:23.422 23:52:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:23.422 23:52:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:23.422 23:52:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:26:23.422 23:52:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:23.422 23:52:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.422 23:52:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:23.422 23:52:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.422 23:52:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:23.422 23:52:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.422 23:52:58 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:23.422 23:52:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.422 23:52:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:23.422 23:52:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:23.422 23:52:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:26:23.422 23:52:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:23.422 23:52:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.422 23:52:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:23.422 23:52:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.422 23:52:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:23.422 23:52:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.422 23:52:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:23.422 23:52:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.422 00:26:23.422 real 0m11.330s 00:26:23.422 user 0m20.147s 00:26:23.422 sys 0m1.370s 00:26:23.422 23:52:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:23.422 23:52:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:23.422 ************************************ 00:26:23.422 END TEST fio_dif_1_multi_subsystems 00:26:23.422 ************************************ 00:26:23.422 23:52:58 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:23.422 23:52:58 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:26:23.422 23:52:58 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:23.422 23:52:58 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:23.422 23:52:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:23.422 ************************************ 00:26:23.422 START TEST fio_dif_rand_params 00:26:23.422 ************************************ 00:26:23.422 23:52:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:26:23.422 23:52:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:26:23.422 23:52:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:26:23.422 23:52:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:26:23.422 23:52:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:26:23.422 23:52:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:26:23.422 23:52:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:26:23.422 23:52:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:26:23.422 23:52:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:26:23.422 23:52:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:23.422 23:52:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:23.422 23:52:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 
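Note: create_subsystems, whose trace continues below, boils down to four RPCs per subsystem id. For this first fio_dif_rand_params pass (NULL_DIF=3, 128k blocks, 3 jobs, iodepth 3, 5s runtime) with the single subsystem 0, that is (rpc_cmd again wrapping scripts/rpc.py; arguments copied from the trace):

  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

i.e. a 64 MB null bdev with 512-byte blocks plus 16 bytes of per-block metadata for T10 protection information, exported over the namespaced NVMe/TCP listener.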
00:26:23.422 23:52:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:23.422 23:52:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:23.423 23:52:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.423 23:52:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:23.423 bdev_null0 00:26:23.423 23:52:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.423 23:52:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:23.423 23:52:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.423 23:52:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:23.423 23:52:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.423 23:52:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:23.423 23:52:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.423 23:52:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:23.423 23:52:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.423 23:52:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:23.423 23:52:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.423 23:52:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:23.423 [2024-07-15 23:52:58.537440] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:23.423 23:52:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.423 23:52:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:26:23.423 23:52:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:23.423 23:52:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:23.423 23:52:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:23.423 23:52:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:26:23.423 23:52:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:23.423 23:52:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:23.423 23:52:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:23.423 23:52:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:23.423 23:52:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:23.423 23:52:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:23.423 23:52:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:23.423 
23:52:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:23.423 23:52:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:23.423 23:52:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:23.423 23:52:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:23.423 23:52:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:23.423 23:52:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:23.423 23:52:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:23.423 { 00:26:23.423 "params": { 00:26:23.423 "name": "Nvme$subsystem", 00:26:23.423 "trtype": "$TEST_TRANSPORT", 00:26:23.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.423 "adrfam": "ipv4", 00:26:23.423 "trsvcid": "$NVMF_PORT", 00:26:23.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.423 "hdgst": ${hdgst:-false}, 00:26:23.423 "ddgst": ${ddgst:-false} 00:26:23.423 }, 00:26:23.423 "method": "bdev_nvme_attach_controller" 00:26:23.423 } 00:26:23.423 EOF 00:26:23.423 )") 00:26:23.423 23:52:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:23.683 23:52:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:23.683 23:52:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:23.683 23:52:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:23.683 23:52:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:23.683 23:52:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:23.683 23:52:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
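Note: the ldd/grep/awk probes above are how fio_plugin assembles LD_PRELOAD: it scans the spdk_bdev fio plugin for a linked libasan or libclang_rt.asan and, if one is found, preloads that sanitizer runtime ahead of the plugin (here both probes come back empty, so asan_lib stays unset). The eventual launch, visible at the end of each of these setup traces, is roughly:

  LD_PRELOAD=" /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev" \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61

where /dev/fd/62 carries the JSON from gen_nvmf_target_json and /dev/fd/61 the job file from gen_fio_conf (both fed in from dif.sh, presumably via process substitution).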
00:26:23.683 23:52:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:23.683 23:52:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:23.683 "params": { 00:26:23.683 "name": "Nvme0", 00:26:23.683 "trtype": "tcp", 00:26:23.683 "traddr": "10.0.0.2", 00:26:23.683 "adrfam": "ipv4", 00:26:23.683 "trsvcid": "4420", 00:26:23.683 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:23.683 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:23.683 "hdgst": false, 00:26:23.683 "ddgst": false 00:26:23.683 }, 00:26:23.683 "method": "bdev_nvme_attach_controller" 00:26:23.683 }' 00:26:23.683 23:52:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:23.683 23:52:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:23.683 23:52:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:23.683 23:52:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:23.683 23:52:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:23.683 23:52:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:23.683 23:52:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:23.683 23:52:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:23.683 23:52:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:23.683 23:52:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:23.683 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:23.683 ... 
00:26:23.683 fio-3.35 00:26:23.683 Starting 3 threads 00:26:23.941 EAL: No free 2048 kB hugepages reported on node 1 00:26:30.545 00:26:30.545 filename0: (groupid=0, jobs=1): err= 0: pid=3900922: Mon Jul 15 23:53:04 2024 00:26:30.545 read: IOPS=247, BW=30.9MiB/s (32.4MB/s)(155MiB/5005msec) 00:26:30.545 slat (nsec): min=4550, max=49232, avg=19752.71, stdev=4167.86 00:26:30.545 clat (usec): min=4944, max=55218, avg=12112.35, stdev=3569.49 00:26:30.545 lat (usec): min=4956, max=55231, avg=12132.10, stdev=3569.75 00:26:30.545 clat percentiles (usec): 00:26:30.545 | 1.00th=[ 5669], 5.00th=[ 7832], 10.00th=[ 8717], 20.00th=[10552], 00:26:30.545 | 30.00th=[11207], 40.00th=[11731], 50.00th=[12256], 60.00th=[12649], 00:26:30.545 | 70.00th=[13042], 80.00th=[13566], 90.00th=[14353], 95.00th=[15139], 00:26:30.545 | 99.00th=[16909], 99.50th=[17957], 99.90th=[54789], 99.95th=[55313], 00:26:30.545 | 99.99th=[55313] 00:26:30.545 bw ( KiB/s): min=29952, max=35655, per=36.04%, avg=31597.50, stdev=1812.45, samples=10 00:26:30.545 iops : min= 234, max= 278, avg=246.80, stdev=14.02, samples=10 00:26:30.545 lat (msec) : 10=15.28%, 20=84.24%, 100=0.49% 00:26:30.545 cpu : usr=94.30%, sys=5.16%, ctx=23, majf=0, minf=145 00:26:30.545 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:30.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.545 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.545 issued rwts: total=1237,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.545 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:30.545 filename0: (groupid=0, jobs=1): err= 0: pid=3900923: Mon Jul 15 23:53:04 2024 00:26:30.545 read: IOPS=216, BW=27.1MiB/s (28.4MB/s)(137MiB/5046msec) 00:26:30.545 slat (nsec): min=4360, max=97899, avg=15933.59, stdev=4824.22 00:26:30.545 clat (usec): min=5263, max=56356, avg=13802.26, stdev=6563.28 00:26:30.545 lat (usec): min=5275, max=56383, avg=13818.19, stdev=6563.14 00:26:30.545 clat percentiles (usec): 00:26:30.545 | 1.00th=[ 7767], 5.00th=[ 9634], 10.00th=[10552], 20.00th=[11338], 00:26:30.545 | 30.00th=[11863], 40.00th=[12387], 50.00th=[12780], 60.00th=[13304], 00:26:30.545 | 70.00th=[13829], 80.00th=[14484], 90.00th=[15401], 95.00th=[16319], 00:26:30.545 | 99.00th=[52691], 99.50th=[53740], 99.90th=[55313], 99.95th=[56361], 00:26:30.545 | 99.99th=[56361] 00:26:30.545 bw ( KiB/s): min=22272, max=30464, per=31.83%, avg=27904.00, stdev=2863.44, samples=10 00:26:30.545 iops : min= 174, max= 238, avg=218.00, stdev=22.37, samples=10 00:26:30.545 lat (msec) : 10=6.50%, 20=90.84%, 50=0.46%, 100=2.20% 00:26:30.545 cpu : usr=93.76%, sys=5.79%, ctx=10, majf=0, minf=104 00:26:30.545 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:30.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.545 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.545 issued rwts: total=1092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.545 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:30.545 filename0: (groupid=0, jobs=1): err= 0: pid=3900924: Mon Jul 15 23:53:04 2024 00:26:30.545 read: IOPS=223, BW=27.9MiB/s (29.3MB/s)(141MiB/5047msec) 00:26:30.545 slat (nsec): min=4932, max=85904, avg=16667.17, stdev=5110.43 00:26:30.545 clat (usec): min=5881, max=53240, avg=13362.69, stdev=5831.89 00:26:30.545 lat (usec): min=5894, max=53254, avg=13379.36, stdev=5831.69 00:26:30.545 clat percentiles (usec): 00:26:30.545 | 
1.00th=[ 6587], 5.00th=[ 8848], 10.00th=[10028], 20.00th=[10945], 00:26:30.545 | 30.00th=[11600], 40.00th=[12256], 50.00th=[12780], 60.00th=[13304], 00:26:30.545 | 70.00th=[13829], 80.00th=[14353], 90.00th=[15139], 95.00th=[16057], 00:26:30.545 | 99.00th=[51119], 99.50th=[52167], 99.90th=[53216], 99.95th=[53216], 00:26:30.545 | 99.99th=[53216] 00:26:30.545 bw ( KiB/s): min=19200, max=32000, per=32.85%, avg=28800.00, stdev=3704.39, samples=10 00:26:30.545 iops : min= 150, max= 250, avg=225.00, stdev=28.94, samples=10 00:26:30.545 lat (msec) : 10=9.49%, 20=88.21%, 50=0.62%, 100=1.68% 00:26:30.545 cpu : usr=93.68%, sys=5.85%, ctx=10, majf=0, minf=112 00:26:30.545 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:30.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.545 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.545 issued rwts: total=1128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.545 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:30.545 00:26:30.545 Run status group 0 (all jobs): 00:26:30.545 READ: bw=85.6MiB/s (89.8MB/s), 27.1MiB/s-30.9MiB/s (28.4MB/s-32.4MB/s), io=432MiB (453MB), run=5005-5047msec 00:26:30.545 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:26:30.545 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:30.545 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:30.545 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:30.545 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:30.545 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:30.545 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.545 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:30.545 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.545 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:30.545 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.545 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:30.545 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.545 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:26:30.545 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:26:30.545 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:26:30.545 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:26:30.545 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:26:30.545 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:26:30.545 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:26:30.545 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:30.545 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:30.545 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:30.545 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:30.545 23:53:04 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:26:30.545 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.545 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:30.545 bdev_null0 00:26:30.545 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:30.546 [2024-07-15 23:53:04.752092] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:30.546 bdev_null1 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
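Note: this second rand_params pass switches to --dif-type 2, with 4k blocks, 8 jobs, iodepth 16 and two extra files, hence null bdevs and subsystems 0, 1 and 2. The --dif-type argument selects which T10 PI checks apply to the 16-byte metadata region: as generally defined by T10, type 1 ties the 32-bit reference tag to the LBA, type 2 checks it against the value carried in the command, and type 3 leaves it unchecked, while the guard CRC and application tag behave the same across types. Combined with the transport's --dif-insert-or-strip option set back at dif.sh@136, the TCP target inserts PI on writes and verifies/strips it on reads without needing a PI-capable backing device, e.g.:

  # bdev_null_create <name> <size_MB> <block_size> [--md-size N] [--dif-type 0-3]
  scripts/rpc.py bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2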
00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:30.546 bdev_null2 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:30.546 { 00:26:30.546 "params": { 00:26:30.546 "name": "Nvme$subsystem", 00:26:30.546 "trtype": "$TEST_TRANSPORT", 00:26:30.546 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:26:30.546 "adrfam": "ipv4", 00:26:30.546 "trsvcid": "$NVMF_PORT", 00:26:30.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:30.546 "hdgst": ${hdgst:-false}, 00:26:30.546 "ddgst": ${ddgst:-false} 00:26:30.546 }, 00:26:30.546 "method": "bdev_nvme_attach_controller" 00:26:30.546 } 00:26:30.546 EOF 00:26:30.546 )") 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:30.546 { 00:26:30.546 "params": { 00:26:30.546 "name": "Nvme$subsystem", 00:26:30.546 "trtype": "$TEST_TRANSPORT", 00:26:30.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:30.546 "adrfam": "ipv4", 00:26:30.546 "trsvcid": "$NVMF_PORT", 00:26:30.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:30.546 "hdgst": ${hdgst:-false}, 00:26:30.546 "ddgst": ${ddgst:-false} 00:26:30.546 }, 00:26:30.546 "method": "bdev_nvme_attach_controller" 00:26:30.546 } 00:26:30.546 EOF 00:26:30.546 )") 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file++ )) 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:30.546 { 00:26:30.546 "params": { 00:26:30.546 "name": "Nvme$subsystem", 00:26:30.546 "trtype": "$TEST_TRANSPORT", 00:26:30.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:30.546 "adrfam": "ipv4", 00:26:30.546 "trsvcid": "$NVMF_PORT", 00:26:30.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:30.546 "hdgst": ${hdgst:-false}, 00:26:30.546 "ddgst": ${ddgst:-false} 00:26:30.546 }, 00:26:30.546 "method": "bdev_nvme_attach_controller" 00:26:30.546 } 00:26:30.546 EOF 00:26:30.546 )") 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:30.546 23:53:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:30.546 "params": { 00:26:30.546 "name": "Nvme0", 00:26:30.546 "trtype": "tcp", 00:26:30.546 "traddr": "10.0.0.2", 00:26:30.546 "adrfam": "ipv4", 00:26:30.546 "trsvcid": "4420", 00:26:30.546 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:30.546 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:30.546 "hdgst": false, 00:26:30.546 "ddgst": false 00:26:30.546 }, 00:26:30.546 "method": "bdev_nvme_attach_controller" 00:26:30.546 },{ 00:26:30.546 "params": { 00:26:30.546 "name": "Nvme1", 00:26:30.546 "trtype": "tcp", 00:26:30.546 "traddr": "10.0.0.2", 00:26:30.546 "adrfam": "ipv4", 00:26:30.546 "trsvcid": "4420", 00:26:30.546 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:30.546 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:30.546 "hdgst": false, 00:26:30.546 "ddgst": false 00:26:30.547 }, 00:26:30.547 "method": "bdev_nvme_attach_controller" 00:26:30.547 },{ 00:26:30.547 "params": { 00:26:30.547 "name": "Nvme2", 00:26:30.547 "trtype": "tcp", 00:26:30.547 "traddr": "10.0.0.2", 00:26:30.547 "adrfam": "ipv4", 00:26:30.547 "trsvcid": "4420", 00:26:30.547 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:30.547 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:30.547 "hdgst": false, 00:26:30.547 "ddgst": false 00:26:30.547 }, 00:26:30.547 "method": "bdev_nvme_attach_controller" 00:26:30.547 }' 00:26:30.547 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:30.547 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:30.547 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:30.547 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:30.547 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:30.547 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:30.547 23:53:04 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:26:30.547 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:30.547 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:30.547 23:53:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:30.547 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:30.547 ... 00:26:30.547 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:30.547 ... 00:26:30.547 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:30.547 ... 00:26:30.547 fio-3.35 00:26:30.547 Starting 24 threads 00:26:30.547 EAL: No free 2048 kB hugepages reported on node 1 00:26:42.755 00:26:42.755 filename0: (groupid=0, jobs=1): err= 0: pid=3901785: Mon Jul 15 23:53:16 2024 00:26:42.755 read: IOPS=76, BW=304KiB/s (311kB/s)(3072KiB/10102msec) 00:26:42.755 slat (nsec): min=8334, max=99086, avg=15433.79, stdev=14056.37 00:26:42.755 clat (msec): min=110, max=361, avg=210.07, stdev=43.52 00:26:42.755 lat (msec): min=110, max=361, avg=210.09, stdev=43.52 00:26:42.755 clat percentiles (msec): 00:26:42.755 | 1.00th=[ 121], 5.00th=[ 138], 10.00th=[ 163], 20.00th=[ 178], 00:26:42.755 | 30.00th=[ 182], 40.00th=[ 197], 50.00th=[ 209], 60.00th=[ 220], 00:26:42.755 | 70.00th=[ 232], 80.00th=[ 243], 90.00th=[ 257], 95.00th=[ 292], 00:26:42.755 | 99.00th=[ 330], 99.50th=[ 363], 99.90th=[ 363], 99.95th=[ 363], 00:26:42.755 | 99.99th=[ 363] 00:26:42.755 bw ( KiB/s): min= 256, max= 496, per=4.89%, avg=300.80, stdev=59.55, samples=20 00:26:42.755 iops : min= 64, max= 124, avg=75.20, stdev=14.89, samples=20 00:26:42.755 lat (msec) : 250=86.72%, 500=13.28% 00:26:42.755 cpu : usr=98.14%, sys=1.28%, ctx=37, majf=0, minf=26 00:26:42.755 IO depths : 1=0.5%, 2=1.8%, 4=9.8%, 8=75.7%, 16=12.2%, 32=0.0%, >=64=0.0% 00:26:42.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.755 complete : 0=0.0%, 4=89.7%, 8=5.1%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.755 issued rwts: total=768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.755 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.755 filename0: (groupid=0, jobs=1): err= 0: pid=3901786: Mon Jul 15 23:53:16 2024 00:26:42.755 read: IOPS=75, BW=302KiB/s (309kB/s)(3048KiB/10109msec) 00:26:42.755 slat (usec): min=6, max=111, avg=38.24, stdev=30.22 00:26:42.755 clat (msec): min=118, max=356, avg=211.87, stdev=42.54 00:26:42.755 lat (msec): min=118, max=356, avg=211.91, stdev=42.55 00:26:42.755 clat percentiles (msec): 00:26:42.755 | 1.00th=[ 123], 5.00th=[ 142], 10.00th=[ 163], 20.00th=[ 178], 00:26:42.755 | 30.00th=[ 190], 40.00th=[ 197], 50.00th=[ 207], 60.00th=[ 226], 00:26:42.755 | 70.00th=[ 236], 80.00th=[ 243], 90.00th=[ 253], 95.00th=[ 288], 00:26:42.755 | 99.00th=[ 347], 99.50th=[ 359], 99.90th=[ 359], 99.95th=[ 359], 00:26:42.755 | 99.99th=[ 359] 00:26:42.755 bw ( KiB/s): min= 224, max= 384, per=4.85%, avg=298.40, stdev=56.22, samples=20 00:26:42.755 iops : min= 56, max= 96, avg=74.60, stdev=14.05, samples=20 00:26:42.755 lat (msec) : 250=87.40%, 500=12.60% 00:26:42.755 cpu : usr=98.09%, sys=1.39%, ctx=25, majf=0, minf=34 00:26:42.755 IO depths : 1=1.6%, 
2=4.5%, 4=14.6%, 8=68.2%, 16=11.2%, 32=0.0%, >=64=0.0% 00:26:42.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.755 complete : 0=0.0%, 4=91.0%, 8=3.7%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.755 issued rwts: total=762,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.755 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.755 filename0: (groupid=0, jobs=1): err= 0: pid=3901787: Mon Jul 15 23:53:16 2024 00:26:42.755 read: IOPS=82, BW=330KiB/s (338kB/s)(3344KiB/10122msec) 00:26:42.755 slat (usec): min=6, max=127, avg=34.94, stdev=27.86 00:26:42.755 clat (msec): min=5, max=329, avg=192.65, stdev=60.00 00:26:42.755 lat (msec): min=5, max=329, avg=192.69, stdev=60.00 00:26:42.755 clat percentiles (msec): 00:26:42.755 | 1.00th=[ 6], 5.00th=[ 51], 10.00th=[ 105], 20.00th=[ 163], 00:26:42.755 | 30.00th=[ 178], 40.00th=[ 194], 50.00th=[ 207], 60.00th=[ 220], 00:26:42.755 | 70.00th=[ 230], 80.00th=[ 239], 90.00th=[ 247], 95.00th=[ 257], 00:26:42.755 | 99.00th=[ 284], 99.50th=[ 300], 99.90th=[ 330], 99.95th=[ 330], 00:26:42.755 | 99.99th=[ 330] 00:26:42.755 bw ( KiB/s): min= 256, max= 768, per=5.37%, avg=330.40, stdev=111.61, samples=20 00:26:42.755 iops : min= 64, max= 192, avg=82.60, stdev=27.90, samples=20 00:26:42.755 lat (msec) : 10=1.91%, 20=1.91%, 100=5.74%, 250=84.21%, 500=6.22% 00:26:42.755 cpu : usr=98.02%, sys=1.45%, ctx=45, majf=0, minf=63 00:26:42.755 IO depths : 1=1.1%, 2=2.5%, 4=10.3%, 8=74.5%, 16=11.6%, 32=0.0%, >=64=0.0% 00:26:42.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.755 complete : 0=0.0%, 4=89.8%, 8=4.8%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.755 issued rwts: total=836,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.755 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.755 filename0: (groupid=0, jobs=1): err= 0: pid=3901788: Mon Jul 15 23:53:16 2024 00:26:42.755 read: IOPS=55, BW=222KiB/s (227kB/s)(2240KiB/10087msec) 00:26:42.755 slat (usec): min=9, max=102, avg=67.53, stdev=22.30 00:26:42.755 clat (msec): min=119, max=508, avg=287.47, stdev=60.95 00:26:42.755 lat (msec): min=119, max=508, avg=287.53, stdev=60.95 00:26:42.755 clat percentiles (msec): 00:26:42.755 | 1.00th=[ 176], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 234], 00:26:42.755 | 30.00th=[ 266], 40.00th=[ 279], 50.00th=[ 300], 60.00th=[ 309], 00:26:42.755 | 70.00th=[ 317], 80.00th=[ 334], 90.00th=[ 359], 95.00th=[ 393], 00:26:42.755 | 99.00th=[ 401], 99.50th=[ 401], 99.90th=[ 510], 99.95th=[ 510], 00:26:42.755 | 99.99th=[ 510] 00:26:42.755 bw ( KiB/s): min= 128, max= 384, per=3.53%, avg=217.60, stdev=73.12, samples=20 00:26:42.755 iops : min= 32, max= 96, avg=54.40, stdev=18.28, samples=20 00:26:42.755 lat (msec) : 250=21.07%, 500=78.57%, 750=0.36% 00:26:42.756 cpu : usr=97.88%, sys=1.42%, ctx=26, majf=0, minf=29 00:26:42.756 IO depths : 1=5.5%, 2=11.8%, 4=25.0%, 8=50.7%, 16=7.0%, 32=0.0%, >=64=0.0% 00:26:42.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.756 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.756 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.756 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.756 filename0: (groupid=0, jobs=1): err= 0: pid=3901789: Mon Jul 15 23:53:16 2024 00:26:42.756 read: IOPS=60, BW=241KiB/s (247kB/s)(2432KiB/10081msec) 00:26:42.756 slat (usec): min=8, max=111, avg=34.83, stdev=23.42 00:26:42.756 clat (msec): min=146, max=446, avg=265.00, stdev=52.54 
00:26:42.756 lat (msec): min=146, max=446, avg=265.03, stdev=52.53 00:26:42.756 clat percentiles (msec): 00:26:42.756 | 1.00th=[ 174], 5.00th=[ 176], 10.00th=[ 178], 20.00th=[ 215], 00:26:42.756 | 30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 266], 60.00th=[ 288], 00:26:42.756 | 70.00th=[ 292], 80.00th=[ 313], 90.00th=[ 326], 95.00th=[ 330], 00:26:42.756 | 99.00th=[ 363], 99.50th=[ 393], 99.90th=[ 447], 99.95th=[ 447], 00:26:42.756 | 99.99th=[ 447] 00:26:42.756 bw ( KiB/s): min= 128, max= 384, per=3.84%, avg=236.80, stdev=62.64, samples=20 00:26:42.756 iops : min= 32, max= 96, avg=59.20, stdev=15.66, samples=20 00:26:42.756 lat (msec) : 250=35.20%, 500=64.80% 00:26:42.756 cpu : usr=98.32%, sys=1.17%, ctx=30, majf=0, minf=23 00:26:42.756 IO depths : 1=4.3%, 2=10.5%, 4=25.0%, 8=52.0%, 16=8.2%, 32=0.0%, >=64=0.0% 00:26:42.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.756 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.756 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.756 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.756 filename0: (groupid=0, jobs=1): err= 0: pid=3901790: Mon Jul 15 23:53:16 2024 00:26:42.756 read: IOPS=60, BW=241KiB/s (247kB/s)(2432KiB/10102msec) 00:26:42.756 slat (usec): min=9, max=112, avg=33.88, stdev=16.97 00:26:42.756 clat (msec): min=150, max=486, avg=265.41, stdev=52.77 00:26:42.756 lat (msec): min=150, max=486, avg=265.45, stdev=52.77 00:26:42.756 clat percentiles (msec): 00:26:42.756 | 1.00th=[ 176], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 232], 00:26:42.756 | 30.00th=[ 241], 40.00th=[ 253], 50.00th=[ 264], 60.00th=[ 279], 00:26:42.756 | 70.00th=[ 300], 80.00th=[ 313], 90.00th=[ 326], 95.00th=[ 347], 00:26:42.756 | 99.00th=[ 372], 99.50th=[ 409], 99.90th=[ 489], 99.95th=[ 489], 00:26:42.756 | 99.99th=[ 489] 00:26:42.756 bw ( KiB/s): min= 128, max= 384, per=3.84%, avg=236.80, stdev=59.55, samples=20 00:26:42.756 iops : min= 32, max= 96, avg=59.20, stdev=14.89, samples=20 00:26:42.756 lat (msec) : 250=37.50%, 500=62.50% 00:26:42.756 cpu : usr=97.85%, sys=1.69%, ctx=17, majf=0, minf=24 00:26:42.756 IO depths : 1=2.5%, 2=8.7%, 4=25.0%, 8=53.8%, 16=10.0%, 32=0.0%, >=64=0.0% 00:26:42.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.756 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.756 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.756 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.756 filename0: (groupid=0, jobs=1): err= 0: pid=3901791: Mon Jul 15 23:53:16 2024 00:26:42.756 read: IOPS=55, BW=222KiB/s (227kB/s)(2240KiB/10103msec) 00:26:42.756 slat (usec): min=9, max=121, avg=72.54, stdev=16.50 00:26:42.756 clat (msec): min=143, max=508, avg=288.05, stdev=70.73 00:26:42.756 lat (msec): min=143, max=508, avg=288.13, stdev=70.73 00:26:42.756 clat percentiles (msec): 00:26:42.756 | 1.00th=[ 153], 5.00th=[ 176], 10.00th=[ 178], 20.00th=[ 232], 00:26:42.756 | 30.00th=[ 264], 40.00th=[ 279], 50.00th=[ 288], 60.00th=[ 309], 00:26:42.756 | 70.00th=[ 326], 80.00th=[ 338], 90.00th=[ 380], 95.00th=[ 401], 00:26:42.756 | 99.00th=[ 502], 99.50th=[ 510], 99.90th=[ 510], 99.95th=[ 510], 00:26:42.756 | 99.99th=[ 510] 00:26:42.756 bw ( KiB/s): min= 128, max= 384, per=3.53%, avg=217.60, stdev=70.49, samples=20 00:26:42.756 iops : min= 32, max= 96, avg=54.40, stdev=17.62, samples=20 00:26:42.756 lat (msec) : 250=27.14%, 500=71.79%, 750=1.07% 00:26:42.756 cpu : 
usr=98.03%, sys=1.39%, ctx=20, majf=0, minf=22 00:26:42.756 IO depths : 1=3.4%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:26:42.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.756 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.756 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.756 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.756 filename0: (groupid=0, jobs=1): err= 0: pid=3901792: Mon Jul 15 23:53:16 2024 00:26:42.756 read: IOPS=54, BW=217KiB/s (222kB/s)(2176KiB/10049msec) 00:26:42.756 slat (usec): min=19, max=117, avg=69.09, stdev=16.74 00:26:42.756 clat (msec): min=168, max=474, avg=294.94, stdev=60.82 00:26:42.756 lat (msec): min=168, max=475, avg=295.01, stdev=60.82 00:26:42.756 clat percentiles (msec): 00:26:42.756 | 1.00th=[ 176], 5.00th=[ 178], 10.00th=[ 197], 20.00th=[ 245], 00:26:42.756 | 30.00th=[ 271], 40.00th=[ 279], 50.00th=[ 300], 60.00th=[ 321], 00:26:42.756 | 70.00th=[ 330], 80.00th=[ 342], 90.00th=[ 359], 95.00th=[ 397], 00:26:42.756 | 99.00th=[ 405], 99.50th=[ 472], 99.90th=[ 477], 99.95th=[ 477], 00:26:42.756 | 99.99th=[ 477] 00:26:42.756 bw ( KiB/s): min= 128, max= 384, per=3.44%, avg=211.20, stdev=72.60, samples=20 00:26:42.756 iops : min= 32, max= 96, avg=52.80, stdev=18.15, samples=20 00:26:42.756 lat (msec) : 250=21.32%, 500=78.68% 00:26:42.756 cpu : usr=97.95%, sys=1.38%, ctx=49, majf=0, minf=26 00:26:42.756 IO depths : 1=5.1%, 2=11.4%, 4=25.0%, 8=51.1%, 16=7.4%, 32=0.0%, >=64=0.0% 00:26:42.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.756 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.756 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.756 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.756 filename1: (groupid=0, jobs=1): err= 0: pid=3901793: Mon Jul 15 23:53:16 2024 00:26:42.756 read: IOPS=74, BW=297KiB/s (304kB/s)(3008KiB/10122msec) 00:26:42.756 slat (usec): min=6, max=139, avg=35.99, stdev=28.76 00:26:42.756 clat (msec): min=5, max=351, avg=214.41, stdev=67.12 00:26:42.756 lat (msec): min=5, max=352, avg=214.45, stdev=67.13 00:26:42.756 clat percentiles (msec): 00:26:42.756 | 1.00th=[ 6], 5.00th=[ 51], 10.00th=[ 161], 20.00th=[ 178], 00:26:42.756 | 30.00th=[ 188], 40.00th=[ 222], 50.00th=[ 234], 60.00th=[ 241], 00:26:42.756 | 70.00th=[ 251], 80.00th=[ 262], 90.00th=[ 279], 95.00th=[ 296], 00:26:42.756 | 99.00th=[ 321], 99.50th=[ 351], 99.90th=[ 351], 99.95th=[ 351], 00:26:42.756 | 99.99th=[ 351] 00:26:42.756 bw ( KiB/s): min= 144, max= 768, per=4.79%, avg=294.40, stdev=123.00, samples=20 00:26:42.756 iops : min= 36, max= 192, avg=73.60, stdev=30.75, samples=20 00:26:42.756 lat (msec) : 10=2.13%, 20=2.13%, 100=4.26%, 250=62.77%, 500=28.72% 00:26:42.756 cpu : usr=97.98%, sys=1.33%, ctx=124, majf=0, minf=27 00:26:42.756 IO depths : 1=2.1%, 2=8.4%, 4=25.0%, 8=54.1%, 16=10.4%, 32=0.0%, >=64=0.0% 00:26:42.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.756 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.756 issued rwts: total=752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.756 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.756 filename1: (groupid=0, jobs=1): err= 0: pid=3901794: Mon Jul 15 23:53:16 2024 00:26:42.757 read: IOPS=58, BW=234KiB/s (240kB/s)(2360KiB/10084msec) 00:26:42.757 slat (usec): min=8, max=109, avg=42.37, stdev=27.92 
00:26:42.757 clat (msec): min=99, max=475, avg=272.90, stdev=57.36 00:26:42.757 lat (msec): min=99, max=475, avg=272.94, stdev=57.37 00:26:42.757 clat percentiles (msec): 00:26:42.757 | 1.00th=[ 176], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 215], 00:26:42.757 | 30.00th=[ 249], 40.00th=[ 262], 50.00th=[ 271], 60.00th=[ 292], 00:26:42.757 | 70.00th=[ 309], 80.00th=[ 321], 90.00th=[ 330], 95.00th=[ 351], 00:26:42.757 | 99.00th=[ 397], 99.50th=[ 439], 99.90th=[ 477], 99.95th=[ 477], 00:26:42.757 | 99.99th=[ 477] 00:26:42.757 bw ( KiB/s): min= 128, max= 384, per=3.73%, avg=229.60, stdev=65.31, samples=20 00:26:42.757 iops : min= 32, max= 96, avg=57.40, stdev=16.33, samples=20 00:26:42.757 lat (msec) : 100=0.34%, 250=31.86%, 500=67.80% 00:26:42.757 cpu : usr=98.17%, sys=1.39%, ctx=34, majf=0, minf=32 00:26:42.757 IO depths : 1=5.1%, 2=11.4%, 4=25.1%, 8=51.2%, 16=7.3%, 32=0.0%, >=64=0.0% 00:26:42.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.757 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.757 issued rwts: total=590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.757 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.757 filename1: (groupid=0, jobs=1): err= 0: pid=3901795: Mon Jul 15 23:53:16 2024 00:26:42.757 read: IOPS=74, BW=299KiB/s (306kB/s)(3016KiB/10102msec) 00:26:42.757 slat (nsec): min=8320, max=62616, avg=15278.23, stdev=8115.94 00:26:42.757 clat (msec): min=120, max=383, avg=213.96, stdev=43.57 00:26:42.757 lat (msec): min=120, max=383, avg=213.97, stdev=43.57 00:26:42.757 clat percentiles (msec): 00:26:42.757 | 1.00th=[ 121], 5.00th=[ 142], 10.00th=[ 155], 20.00th=[ 176], 00:26:42.757 | 30.00th=[ 192], 40.00th=[ 207], 50.00th=[ 215], 60.00th=[ 230], 00:26:42.757 | 70.00th=[ 239], 80.00th=[ 247], 90.00th=[ 271], 95.00th=[ 284], 00:26:42.757 | 99.00th=[ 326], 99.50th=[ 384], 99.90th=[ 384], 99.95th=[ 384], 00:26:42.757 | 99.99th=[ 384] 00:26:42.757 bw ( KiB/s): min= 256, max= 512, per=4.80%, avg=295.15, stdev=67.71, samples=20 00:26:42.757 iops : min= 64, max= 128, avg=73.75, stdev=16.88, samples=20 00:26:42.757 lat (msec) : 250=83.55%, 500=16.45% 00:26:42.757 cpu : usr=98.30%, sys=1.29%, ctx=16, majf=0, minf=27 00:26:42.757 IO depths : 1=2.7%, 2=6.1%, 4=16.4%, 8=64.9%, 16=9.9%, 32=0.0%, >=64=0.0% 00:26:42.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.757 complete : 0=0.0%, 4=91.6%, 8=2.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.757 issued rwts: total=754,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.757 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.757 filename1: (groupid=0, jobs=1): err= 0: pid=3901796: Mon Jul 15 23:53:16 2024 00:26:42.757 read: IOPS=61, BW=247KiB/s (253kB/s)(2496KiB/10109msec) 00:26:42.757 slat (usec): min=12, max=110, avg=26.87, stdev=10.45 00:26:42.757 clat (msec): min=121, max=349, avg=258.82, stdev=55.36 00:26:42.757 lat (msec): min=121, max=349, avg=258.84, stdev=55.36 00:26:42.757 clat percentiles (msec): 00:26:42.757 | 1.00th=[ 142], 5.00th=[ 150], 10.00th=[ 176], 20.00th=[ 188], 00:26:42.757 | 30.00th=[ 239], 40.00th=[ 249], 50.00th=[ 262], 60.00th=[ 279], 00:26:42.757 | 70.00th=[ 300], 80.00th=[ 313], 90.00th=[ 326], 95.00th=[ 330], 00:26:42.757 | 99.00th=[ 351], 99.50th=[ 351], 99.90th=[ 351], 99.95th=[ 351], 00:26:42.757 | 99.99th=[ 351] 00:26:42.757 bw ( KiB/s): min= 128, max= 384, per=3.96%, avg=243.20, stdev=68.00, samples=20 00:26:42.757 iops : min= 32, max= 96, avg=60.80, stdev=17.00, samples=20 
00:26:42.757 lat (msec) : 250=41.03%, 500=58.97% 00:26:42.757 cpu : usr=97.28%, sys=1.80%, ctx=38, majf=0, minf=33 00:26:42.757 IO depths : 1=4.6%, 2=10.9%, 4=25.0%, 8=51.6%, 16=7.9%, 32=0.0%, >=64=0.0% 00:26:42.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.757 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.757 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.757 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.757 filename1: (groupid=0, jobs=1): err= 0: pid=3901797: Mon Jul 15 23:53:16 2024 00:26:42.757 read: IOPS=55, BW=222KiB/s (227kB/s)(2240KiB/10087msec) 00:26:42.757 slat (usec): min=8, max=100, avg=44.77, stdev=28.86 00:26:42.757 clat (msec): min=119, max=473, avg=287.80, stdev=62.09 00:26:42.757 lat (msec): min=119, max=473, avg=287.84, stdev=62.08 00:26:42.757 clat percentiles (msec): 00:26:42.757 | 1.00th=[ 176], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 215], 00:26:42.757 | 30.00th=[ 266], 40.00th=[ 279], 50.00th=[ 305], 60.00th=[ 309], 00:26:42.757 | 70.00th=[ 326], 80.00th=[ 330], 90.00th=[ 347], 95.00th=[ 397], 00:26:42.757 | 99.00th=[ 414], 99.50th=[ 414], 99.90th=[ 472], 99.95th=[ 472], 00:26:42.757 | 99.99th=[ 472] 00:26:42.757 bw ( KiB/s): min= 128, max= 368, per=3.53%, avg=217.60, stdev=71.82, samples=20 00:26:42.757 iops : min= 32, max= 92, avg=54.40, stdev=17.95, samples=20 00:26:42.757 lat (msec) : 250=20.36%, 500=79.64% 00:26:42.757 cpu : usr=98.23%, sys=1.37%, ctx=17, majf=0, minf=23 00:26:42.757 IO depths : 1=5.7%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.8%, 32=0.0%, >=64=0.0% 00:26:42.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.757 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.757 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.757 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.757 filename1: (groupid=0, jobs=1): err= 0: pid=3901798: Mon Jul 15 23:53:16 2024 00:26:42.757 read: IOPS=64, BW=260KiB/s (266kB/s)(2624KiB/10102msec) 00:26:42.757 slat (nsec): min=8756, max=71787, avg=22988.29, stdev=9983.82 00:26:42.757 clat (msec): min=141, max=412, avg=246.20, stdev=53.73 00:26:42.757 lat (msec): min=141, max=412, avg=246.23, stdev=53.73 00:26:42.757 clat percentiles (msec): 00:26:42.757 | 1.00th=[ 142], 5.00th=[ 155], 10.00th=[ 176], 20.00th=[ 188], 00:26:42.757 | 30.00th=[ 232], 40.00th=[ 241], 50.00th=[ 251], 60.00th=[ 259], 00:26:42.757 | 70.00th=[ 275], 80.00th=[ 292], 90.00th=[ 313], 95.00th=[ 326], 00:26:42.757 | 99.00th=[ 363], 99.50th=[ 397], 99.90th=[ 414], 99.95th=[ 414], 00:26:42.757 | 99.99th=[ 414] 00:26:42.757 bw ( KiB/s): min= 128, max= 384, per=4.15%, avg=255.95, stdev=69.16, samples=20 00:26:42.757 iops : min= 32, max= 96, avg=63.95, stdev=17.22, samples=20 00:26:42.757 lat (msec) : 250=48.78%, 500=51.22% 00:26:42.757 cpu : usr=98.29%, sys=1.34%, ctx=15, majf=0, minf=35 00:26:42.757 IO depths : 1=2.3%, 2=8.5%, 4=25.0%, 8=54.0%, 16=10.2%, 32=0.0%, >=64=0.0% 00:26:42.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.757 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.757 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.757 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.757 filename1: (groupid=0, jobs=1): err= 0: pid=3901799: Mon Jul 15 23:53:16 2024 00:26:42.757 read: IOPS=60, BW=241KiB/s (247kB/s)(2432KiB/10093msec) 00:26:42.757 
slat (nsec): min=8965, max=55860, avg=26717.16, stdev=9689.74 00:26:42.757 clat (msec): min=99, max=443, avg=265.37, stdev=52.54 00:26:42.757 lat (msec): min=99, max=443, avg=265.40, stdev=52.54 00:26:42.757 clat percentiles (msec): 00:26:42.757 | 1.00th=[ 176], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 222], 00:26:42.757 | 30.00th=[ 241], 40.00th=[ 251], 50.00th=[ 264], 60.00th=[ 279], 00:26:42.757 | 70.00th=[ 300], 80.00th=[ 313], 90.00th=[ 326], 95.00th=[ 338], 00:26:42.758 | 99.00th=[ 351], 99.50th=[ 405], 99.90th=[ 443], 99.95th=[ 443], 00:26:42.758 | 99.99th=[ 443] 00:26:42.758 bw ( KiB/s): min= 128, max= 384, per=3.84%, avg=236.80, stdev=61.11, samples=20 00:26:42.758 iops : min= 32, max= 96, avg=59.20, stdev=15.28, samples=20 00:26:42.758 lat (msec) : 100=0.33%, 250=37.34%, 500=62.34% 00:26:42.758 cpu : usr=98.15%, sys=1.49%, ctx=28, majf=0, minf=37 00:26:42.758 IO depths : 1=3.1%, 2=9.4%, 4=25.0%, 8=53.1%, 16=9.4%, 32=0.0%, >=64=0.0% 00:26:42.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.758 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.758 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.758 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.758 filename1: (groupid=0, jobs=1): err= 0: pid=3901800: Mon Jul 15 23:53:16 2024 00:26:42.758 read: IOPS=61, BW=247KiB/s (253kB/s)(2496KiB/10103msec) 00:26:42.758 slat (nsec): min=9103, max=97165, avg=41351.27, stdev=23065.25 00:26:42.758 clat (msec): min=99, max=437, avg=258.70, stdev=56.31 00:26:42.758 lat (msec): min=99, max=438, avg=258.75, stdev=56.31 00:26:42.758 clat percentiles (msec): 00:26:42.758 | 1.00th=[ 136], 5.00th=[ 176], 10.00th=[ 178], 20.00th=[ 209], 00:26:42.758 | 30.00th=[ 234], 40.00th=[ 251], 50.00th=[ 259], 60.00th=[ 271], 00:26:42.758 | 70.00th=[ 292], 80.00th=[ 313], 90.00th=[ 326], 95.00th=[ 338], 00:26:42.758 | 99.00th=[ 363], 99.50th=[ 401], 99.90th=[ 439], 99.95th=[ 439], 00:26:42.758 | 99.99th=[ 439] 00:26:42.758 bw ( KiB/s): min= 128, max= 384, per=3.96%, avg=243.20, stdev=79.51, samples=20 00:26:42.758 iops : min= 32, max= 96, avg=60.80, stdev=19.88, samples=20 00:26:42.758 lat (msec) : 100=0.32%, 250=37.50%, 500=62.18% 00:26:42.758 cpu : usr=98.24%, sys=1.35%, ctx=15, majf=0, minf=27 00:26:42.758 IO depths : 1=3.0%, 2=9.3%, 4=25.0%, 8=53.2%, 16=9.5%, 32=0.0%, >=64=0.0% 00:26:42.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.758 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.758 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.758 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.758 filename2: (groupid=0, jobs=1): err= 0: pid=3901801: Mon Jul 15 23:53:16 2024 00:26:42.758 read: IOPS=55, BW=222KiB/s (228kB/s)(2240KiB/10068msec) 00:26:42.758 slat (usec): min=12, max=107, avg=69.58, stdev=19.03 00:26:42.758 clat (msec): min=103, max=486, avg=287.07, stdev=68.80 00:26:42.758 lat (msec): min=103, max=486, avg=287.14, stdev=68.81 00:26:42.758 clat percentiles (msec): 00:26:42.758 | 1.00th=[ 169], 5.00th=[ 178], 10.00th=[ 180], 20.00th=[ 224], 00:26:42.758 | 30.00th=[ 251], 40.00th=[ 275], 50.00th=[ 279], 60.00th=[ 313], 00:26:42.758 | 70.00th=[ 326], 80.00th=[ 342], 90.00th=[ 368], 95.00th=[ 397], 00:26:42.758 | 99.00th=[ 477], 99.50th=[ 481], 99.90th=[ 489], 99.95th=[ 489], 00:26:42.758 | 99.99th=[ 489] 00:26:42.758 bw ( KiB/s): min= 128, max= 384, per=3.53%, avg=217.55, stdev=71.79, samples=20 
00:26:42.758 iops : min= 32, max= 96, avg=54.35, stdev=17.93, samples=20 00:26:42.758 lat (msec) : 250=28.21%, 500=71.79% 00:26:42.758 cpu : usr=98.23%, sys=1.30%, ctx=23, majf=0, minf=19 00:26:42.758 IO depths : 1=3.4%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:26:42.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.758 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.758 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.758 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.758 filename2: (groupid=0, jobs=1): err= 0: pid=3901802: Mon Jul 15 23:53:16 2024 00:26:42.758 read: IOPS=80, BW=322KiB/s (330kB/s)(3264KiB/10130msec) 00:26:42.758 slat (usec): min=4, max=111, avg=21.71, stdev=19.60 00:26:42.758 clat (msec): min=5, max=312, avg=197.70, stdev=56.77 00:26:42.758 lat (msec): min=5, max=312, avg=197.72, stdev=56.77 00:26:42.758 clat percentiles (msec): 00:26:42.758 | 1.00th=[ 6], 5.00th=[ 52], 10.00th=[ 161], 20.00th=[ 171], 00:26:42.758 | 30.00th=[ 180], 40.00th=[ 194], 50.00th=[ 203], 60.00th=[ 226], 00:26:42.758 | 70.00th=[ 236], 80.00th=[ 241], 90.00th=[ 253], 95.00th=[ 259], 00:26:42.758 | 99.00th=[ 264], 99.50th=[ 264], 99.90th=[ 313], 99.95th=[ 313], 00:26:42.758 | 99.99th=[ 313] 00:26:42.758 bw ( KiB/s): min= 256, max= 769, per=5.21%, avg=320.05, stdev=118.91, samples=20 00:26:42.758 iops : min= 64, max= 192, avg=80.00, stdev=29.68, samples=20 00:26:42.758 lat (msec) : 10=1.96%, 20=1.96%, 100=3.92%, 250=80.88%, 500=11.27% 00:26:42.758 cpu : usr=98.21%, sys=1.31%, ctx=26, majf=0, minf=31 00:26:42.758 IO depths : 1=1.1%, 2=7.1%, 4=24.0%, 8=56.2%, 16=11.5%, 32=0.0%, >=64=0.0% 00:26:42.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.758 complete : 0=0.0%, 4=94.2%, 8=0.3%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.758 issued rwts: total=816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.758 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.758 filename2: (groupid=0, jobs=1): err= 0: pid=3901803: Mon Jul 15 23:53:16 2024 00:26:42.758 read: IOPS=64, BW=259KiB/s (265kB/s)(2624KiB/10129msec) 00:26:42.758 slat (usec): min=6, max=127, avg=55.00, stdev=24.13 00:26:42.758 clat (msec): min=5, max=462, avg=246.46, stdev=91.12 00:26:42.758 lat (msec): min=5, max=462, avg=246.51, stdev=91.13 00:26:42.758 clat percentiles (msec): 00:26:42.758 | 1.00th=[ 6], 5.00th=[ 51], 10.00th=[ 153], 20.00th=[ 178], 00:26:42.758 | 30.00th=[ 232], 40.00th=[ 243], 50.00th=[ 264], 60.00th=[ 279], 00:26:42.758 | 70.00th=[ 305], 80.00th=[ 321], 90.00th=[ 347], 95.00th=[ 368], 00:26:42.758 | 99.00th=[ 405], 99.50th=[ 451], 99.90th=[ 464], 99.95th=[ 464], 00:26:42.758 | 99.99th=[ 464] 00:26:42.758 bw ( KiB/s): min= 128, max= 768, per=4.15%, avg=256.00, stdev=135.67, samples=20 00:26:42.758 iops : min= 32, max= 192, avg=64.00, stdev=33.92, samples=20 00:26:42.758 lat (msec) : 10=2.44%, 20=2.44%, 100=4.88%, 250=33.84%, 500=56.40% 00:26:42.758 cpu : usr=98.18%, sys=1.30%, ctx=36, majf=0, minf=27 00:26:42.758 IO depths : 1=3.0%, 2=9.1%, 4=24.5%, 8=53.8%, 16=9.5%, 32=0.0%, >=64=0.0% 00:26:42.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.758 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.758 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.758 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.758 filename2: (groupid=0, jobs=1): err= 0: 
pid=3901804: Mon Jul 15 23:53:16 2024 00:26:42.758 read: IOPS=61, BW=247KiB/s (253kB/s)(2496KiB/10109msec) 00:26:42.758 slat (usec): min=7, max=120, avg=41.98, stdev=27.90 00:26:42.758 clat (msec): min=98, max=467, avg=258.71, stdev=58.78 00:26:42.758 lat (msec): min=98, max=467, avg=258.75, stdev=58.78 00:26:42.758 clat percentiles (msec): 00:26:42.758 | 1.00th=[ 142], 5.00th=[ 169], 10.00th=[ 176], 20.00th=[ 209], 00:26:42.758 | 30.00th=[ 243], 40.00th=[ 251], 50.00th=[ 259], 60.00th=[ 275], 00:26:42.758 | 70.00th=[ 288], 80.00th=[ 309], 90.00th=[ 326], 95.00th=[ 347], 00:26:42.758 | 99.00th=[ 422], 99.50th=[ 451], 99.90th=[ 468], 99.95th=[ 468], 00:26:42.758 | 99.99th=[ 468] 00:26:42.758 bw ( KiB/s): min= 128, max= 384, per=3.96%, avg=243.20, stdev=68.00, samples=20 00:26:42.758 iops : min= 32, max= 96, avg=60.80, stdev=17.00, samples=20 00:26:42.758 lat (msec) : 100=0.32%, 250=38.46%, 500=61.22% 00:26:42.758 cpu : usr=97.77%, sys=1.50%, ctx=73, majf=0, minf=23 00:26:42.758 IO depths : 1=3.0%, 2=9.3%, 4=25.0%, 8=53.2%, 16=9.5%, 32=0.0%, >=64=0.0% 00:26:42.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.759 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.759 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.759 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.759 filename2: (groupid=0, jobs=1): err= 0: pid=3901805: Mon Jul 15 23:53:16 2024 00:26:42.759 read: IOPS=55, BW=223KiB/s (228kB/s)(2240KiB/10046msec) 00:26:42.759 slat (usec): min=8, max=110, avg=24.29, stdev=18.89 00:26:42.759 clat (msec): min=99, max=475, avg=286.81, stdev=66.83 00:26:42.759 lat (msec): min=99, max=475, avg=286.83, stdev=66.82 00:26:42.759 clat percentiles (msec): 00:26:42.759 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 178], 20.00th=[ 215], 00:26:42.759 | 30.00th=[ 264], 40.00th=[ 275], 50.00th=[ 288], 60.00th=[ 313], 00:26:42.759 | 70.00th=[ 330], 80.00th=[ 334], 90.00th=[ 355], 95.00th=[ 388], 00:26:42.759 | 99.00th=[ 477], 99.50th=[ 477], 99.90th=[ 477], 99.95th=[ 477], 00:26:42.759 | 99.99th=[ 477] 00:26:42.759 bw ( KiB/s): min= 128, max= 384, per=3.53%, avg=217.60, stdev=70.49, samples=20 00:26:42.759 iops : min= 32, max= 96, avg=54.40, stdev=17.62, samples=20 00:26:42.759 lat (msec) : 100=0.36%, 250=20.71%, 500=78.93% 00:26:42.759 cpu : usr=97.82%, sys=1.47%, ctx=44, majf=0, minf=32 00:26:42.759 IO depths : 1=4.3%, 2=10.5%, 4=25.0%, 8=52.0%, 16=8.2%, 32=0.0%, >=64=0.0% 00:26:42.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.759 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.759 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.759 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.759 filename2: (groupid=0, jobs=1): err= 0: pid=3901806: Mon Jul 15 23:53:16 2024 00:26:42.759 read: IOPS=60, BW=241KiB/s (247kB/s)(2432KiB/10090msec) 00:26:42.759 slat (nsec): min=8710, max=93757, avg=32010.68, stdev=19284.40 00:26:42.759 clat (msec): min=119, max=471, avg=265.12, stdev=58.00 00:26:42.759 lat (msec): min=119, max=471, avg=265.15, stdev=58.00 00:26:42.759 clat percentiles (msec): 00:26:42.759 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 178], 20.00th=[ 205], 00:26:42.759 | 30.00th=[ 234], 40.00th=[ 255], 50.00th=[ 266], 60.00th=[ 279], 00:26:42.759 | 70.00th=[ 300], 80.00th=[ 313], 90.00th=[ 326], 95.00th=[ 330], 00:26:42.759 | 99.00th=[ 397], 99.50th=[ 405], 99.90th=[ 472], 99.95th=[ 472], 00:26:42.759 | 
99.99th=[ 472] 00:26:42.759 bw ( KiB/s): min= 128, max= 368, per=3.84%, avg=236.80, stdev=57.95, samples=20 00:26:42.759 iops : min= 32, max= 92, avg=59.20, stdev=14.49, samples=20 00:26:42.759 lat (msec) : 250=35.86%, 500=64.14% 00:26:42.759 cpu : usr=98.17%, sys=1.30%, ctx=13, majf=0, minf=30 00:26:42.759 IO depths : 1=4.3%, 2=10.5%, 4=25.0%, 8=52.0%, 16=8.2%, 32=0.0%, >=64=0.0% 00:26:42.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.759 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.759 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.759 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.759 filename2: (groupid=0, jobs=1): err= 0: pid=3901807: Mon Jul 15 23:53:16 2024 00:26:42.759 read: IOPS=71, BW=285KiB/s (292kB/s)(2880KiB/10109msec) 00:26:42.759 slat (usec): min=7, max=105, avg=25.58, stdev=23.47 00:26:42.759 clat (msec): min=117, max=378, avg=224.32, stdev=52.17 00:26:42.759 lat (msec): min=118, max=378, avg=224.35, stdev=52.17 00:26:42.759 clat percentiles (msec): 00:26:42.759 | 1.00th=[ 118], 5.00th=[ 142], 10.00th=[ 169], 20.00th=[ 178], 00:26:42.759 | 30.00th=[ 192], 40.00th=[ 207], 50.00th=[ 220], 60.00th=[ 232], 00:26:42.759 | 70.00th=[ 245], 80.00th=[ 264], 90.00th=[ 288], 95.00th=[ 321], 00:26:42.759 | 99.00th=[ 376], 99.50th=[ 380], 99.90th=[ 380], 99.95th=[ 380], 00:26:42.759 | 99.99th=[ 380] 00:26:42.759 bw ( KiB/s): min= 224, max= 384, per=4.58%, avg=281.60, stdev=42.93, samples=20 00:26:42.759 iops : min= 56, max= 96, avg=70.40, stdev=10.73, samples=20 00:26:42.759 lat (msec) : 250=72.50%, 500=27.50% 00:26:42.759 cpu : usr=98.26%, sys=1.12%, ctx=42, majf=0, minf=23 00:26:42.759 IO depths : 1=1.8%, 2=4.4%, 4=13.8%, 8=69.0%, 16=11.0%, 32=0.0%, >=64=0.0% 00:26:42.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.759 complete : 0=0.0%, 4=90.8%, 8=4.0%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.759 issued rwts: total=720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.759 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:42.759 filename2: (groupid=0, jobs=1): err= 0: pid=3901808: Mon Jul 15 23:53:16 2024 00:26:42.759 read: IOPS=58, BW=235KiB/s (240kB/s)(2368KiB/10098msec) 00:26:42.759 slat (usec): min=8, max=124, avg=50.87, stdev=26.65 00:26:42.759 clat (msec): min=150, max=493, avg=272.42, stdev=52.69 00:26:42.759 lat (msec): min=150, max=493, avg=272.47, stdev=52.69 00:26:42.759 clat percentiles (msec): 00:26:42.759 | 1.00th=[ 176], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 236], 00:26:42.759 | 30.00th=[ 249], 40.00th=[ 264], 50.00th=[ 275], 60.00th=[ 288], 00:26:42.759 | 70.00th=[ 292], 80.00th=[ 321], 90.00th=[ 334], 95.00th=[ 351], 00:26:42.759 | 99.00th=[ 380], 99.50th=[ 380], 99.90th=[ 493], 99.95th=[ 493], 00:26:42.759 | 99.99th=[ 493] 00:26:42.759 bw ( KiB/s): min= 128, max= 384, per=3.75%, avg=230.40, stdev=64.08, samples=20 00:26:42.759 iops : min= 32, max= 96, avg=57.60, stdev=16.02, samples=20 00:26:42.759 lat (msec) : 250=31.08%, 500=68.92% 00:26:42.759 cpu : usr=97.53%, sys=1.63%, ctx=92, majf=0, minf=34 00:26:42.759 IO depths : 1=5.2%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.3%, 32=0.0%, >=64=0.0% 00:26:42.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.759 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.759 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.759 latency : target=0, window=0, percentile=100.00%, depth=16 
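
As a sanity check on the Run status summary that follows: 24 jobs reading a combined 60.7 MiB (about 62,157 KiB) over the roughly 10.1 s window works out to 62157 / 10.13 ≈ 6136 KiB/s, matching the reported aggregate of 6140KiB/s, and the per-file 217-330 KiB/s spread brackets an even 24-way split (6140 / 24 ≈ 256 KiB/s).
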
00:26:42.759 00:26:42.759 Run status group 0 (all jobs): 00:26:42.759 READ: bw=6140KiB/s (6288kB/s), 217KiB/s-330KiB/s (222kB/s-338kB/s), io=60.7MiB (63.7MB), run=10046-10130msec 00:26:42.759 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:42.759 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:42.759 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:42.759 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:42.759 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:42.759 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:42.759 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.759 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:42.759 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.759 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:42.759 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.759 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:42.759 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.759 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:42.759 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:42.759 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:42.759 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:42.759 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 
00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:42.760 bdev_null0 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:42.760 [2024-07-15 23:53:16.683730] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:42.760 bdev_null1 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:42.760 { 00:26:42.760 "params": { 00:26:42.760 "name": "Nvme$subsystem", 00:26:42.760 "trtype": "$TEST_TRANSPORT", 00:26:42.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:42.760 "adrfam": "ipv4", 00:26:42.760 "trsvcid": "$NVMF_PORT", 00:26:42.760 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:42.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:42.760 "hdgst": ${hdgst:-false}, 00:26:42.760 "ddgst": ${ddgst:-false} 00:26:42.760 }, 00:26:42.760 "method": "bdev_nvme_attach_controller" 00:26:42.760 } 00:26:42.760 EOF 00:26:42.760 )") 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@54 -- # local file 00:26:42.760 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:42.761 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:42.761 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:42.761 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:42.761 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:42.761 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:42.761 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:42.761 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:42.761 23:53:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:42.761 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:42.761 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:42.761 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:42.761 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:42.761 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:42.761 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:42.761 23:53:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:42.761 23:53:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:42.761 { 00:26:42.761 "params": { 00:26:42.761 "name": "Nvme$subsystem", 00:26:42.761 "trtype": "$TEST_TRANSPORT", 00:26:42.761 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:42.761 "adrfam": "ipv4", 00:26:42.761 "trsvcid": "$NVMF_PORT", 00:26:42.761 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:42.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:42.761 "hdgst": ${hdgst:-false}, 00:26:42.761 "ddgst": ${ddgst:-false} 00:26:42.761 }, 00:26:42.761 "method": "bdev_nvme_attach_controller" 00:26:42.761 } 00:26:42.761 EOF 00:26:42.761 )") 00:26:42.761 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:42.761 23:53:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:42.761 23:53:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:42.761 23:53:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:26:42.761 23:53:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:42.761 23:53:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:42.761 "params": { 00:26:42.761 "name": "Nvme0", 00:26:42.761 "trtype": "tcp", 00:26:42.761 "traddr": "10.0.0.2", 00:26:42.761 "adrfam": "ipv4", 00:26:42.761 "trsvcid": "4420", 00:26:42.761 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:42.761 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:42.761 "hdgst": false, 00:26:42.761 "ddgst": false 00:26:42.761 }, 00:26:42.761 "method": "bdev_nvme_attach_controller" 00:26:42.761 },{ 00:26:42.761 "params": { 00:26:42.761 "name": "Nvme1", 00:26:42.761 "trtype": "tcp", 00:26:42.761 "traddr": "10.0.0.2", 00:26:42.761 "adrfam": "ipv4", 00:26:42.761 "trsvcid": "4420", 00:26:42.761 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:42.761 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:42.761 "hdgst": false, 00:26:42.761 "ddgst": false 00:26:42.761 }, 00:26:42.761 "method": "bdev_nvme_attach_controller" 00:26:42.761 }' 00:26:42.761 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:42.761 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:42.761 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:42.761 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:42.761 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:42.761 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:42.761 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:42.761 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:42.761 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:42.761 23:53:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:42.761 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:42.761 ... 00:26:42.761 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:42.761 ... 
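
The job file driving the 4-thread run below never lands in the log (it is fed to fio over an anonymous fd, like the JSON config). A hedged reconstruction from the traced parameters (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, plus randread per the banner above; numjobs=2 across two files is what yields the 4 threads started below). The filename targets and the stand-in bdev.json are assumptions, not taken from the log:

# Sketch only: rebuild the kind of job file gen_fio_conf emits here and launch
# fio the way the fio_bdev wrapper does. thread=1 is required by the spdk_bdev
# engine; bdev.json stands in for the /dev/fd/62 pipe used by the harness.
cat > /tmp/dif.fio <<'EOF'
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json /tmp/dif.fio
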
00:26:42.761 fio-3.35 00:26:42.761 Starting 4 threads 00:26:42.761 EAL: No free 2048 kB hugepages reported on node 1 00:26:48.021 00:26:48.021 filename0: (groupid=0, jobs=1): err= 0: pid=3903197: Mon Jul 15 23:53:22 2024 00:26:48.021 read: IOPS=1752, BW=13.7MiB/s (14.4MB/s)(68.5MiB/5001msec) 00:26:48.021 slat (nsec): min=6545, max=82194, avg=23706.70, stdev=12426.14 00:26:48.021 clat (usec): min=945, max=8356, avg=4468.09, stdev=477.23 00:26:48.021 lat (usec): min=958, max=8370, avg=4491.79, stdev=477.35 00:26:48.021 clat percentiles (usec): 00:26:48.021 | 1.00th=[ 3032], 5.00th=[ 4080], 10.00th=[ 4228], 20.00th=[ 4293], 00:26:48.021 | 30.00th=[ 4359], 40.00th=[ 4424], 50.00th=[ 4424], 60.00th=[ 4490], 00:26:48.021 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 4686], 95.00th=[ 4817], 00:26:48.021 | 99.00th=[ 6390], 99.50th=[ 7111], 99.90th=[ 7963], 99.95th=[ 8094], 00:26:48.021 | 99.99th=[ 8356] 00:26:48.021 bw ( KiB/s): min=13616, max=14208, per=24.95%, avg=14014.22, stdev=167.89, samples=9 00:26:48.021 iops : min= 1702, max= 1776, avg=1751.78, stdev=20.99, samples=9 00:26:48.021 lat (usec) : 1000=0.02% 00:26:48.021 lat (msec) : 2=0.52%, 4=2.93%, 10=96.52% 00:26:48.021 cpu : usr=94.44%, sys=4.86%, ctx=8, majf=0, minf=0 00:26:48.021 IO depths : 1=1.0%, 2=23.4%, 4=51.3%, 8=24.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:48.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.021 complete : 0=0.0%, 4=90.2%, 8=9.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.021 issued rwts: total=8763,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.021 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:48.021 filename0: (groupid=0, jobs=1): err= 0: pid=3903198: Mon Jul 15 23:53:22 2024 00:26:48.021 read: IOPS=1757, BW=13.7MiB/s (14.4MB/s)(68.7MiB/5005msec) 00:26:48.021 slat (nsec): min=6623, max=72631, avg=16564.61, stdev=10863.26 00:26:48.021 clat (usec): min=1065, max=10802, avg=4500.46, stdev=391.92 00:26:48.021 lat (usec): min=1078, max=10834, avg=4517.03, stdev=392.15 00:26:48.021 clat percentiles (usec): 00:26:48.021 | 1.00th=[ 3621], 5.00th=[ 4080], 10.00th=[ 4228], 20.00th=[ 4359], 00:26:48.021 | 30.00th=[ 4424], 40.00th=[ 4490], 50.00th=[ 4490], 60.00th=[ 4555], 00:26:48.021 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 4686], 95.00th=[ 4817], 00:26:48.021 | 99.00th=[ 5669], 99.50th=[ 6521], 99.90th=[ 7963], 99.95th=[10814], 00:26:48.021 | 99.99th=[10814] 00:26:48.021 bw ( KiB/s): min=13792, max=14336, per=25.03%, avg=14062.40, stdev=132.47, samples=10 00:26:48.021 iops : min= 1724, max= 1792, avg=1757.80, stdev=16.56, samples=10 00:26:48.021 lat (msec) : 2=0.20%, 4=3.85%, 10=95.85%, 20=0.09% 00:26:48.021 cpu : usr=94.20%, sys=5.24%, ctx=11, majf=0, minf=0 00:26:48.021 IO depths : 1=0.4%, 2=8.7%, 4=64.8%, 8=26.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:48.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.021 complete : 0=0.0%, 4=91.1%, 8=8.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.021 issued rwts: total=8797,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.021 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:48.021 filename1: (groupid=0, jobs=1): err= 0: pid=3903199: Mon Jul 15 23:53:22 2024 00:26:48.021 read: IOPS=1757, BW=13.7MiB/s (14.4MB/s)(68.7MiB/5001msec) 00:26:48.021 slat (usec): min=7, max=243, avg=24.04, stdev=12.77 00:26:48.021 clat (usec): min=1042, max=7845, avg=4452.76, stdev=476.84 00:26:48.021 lat (usec): min=1054, max=7870, avg=4476.80, stdev=477.57 00:26:48.021 clat percentiles (usec): 
00:26:48.021 | 1.00th=[ 2507], 5.00th=[ 4080], 10.00th=[ 4228], 20.00th=[ 4293], 00:26:48.021 | 30.00th=[ 4359], 40.00th=[ 4424], 50.00th=[ 4424], 60.00th=[ 4490], 00:26:48.021 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 4686], 95.00th=[ 4817], 00:26:48.021 | 99.00th=[ 6456], 99.50th=[ 6980], 99.90th=[ 7701], 99.95th=[ 7767], 00:26:48.021 | 99.99th=[ 7832] 00:26:48.021 bw ( KiB/s): min=13888, max=14224, per=25.02%, avg=14055.11, stdev=116.79, samples=9 00:26:48.021 iops : min= 1736, max= 1778, avg=1756.89, stdev=14.60, samples=9 00:26:48.021 lat (msec) : 2=0.67%, 4=3.47%, 10=95.86% 00:26:48.021 cpu : usr=94.62%, sys=4.84%, ctx=15, majf=0, minf=9 00:26:48.021 IO depths : 1=0.6%, 2=23.1%, 4=51.4%, 8=24.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:48.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.021 complete : 0=0.0%, 4=90.4%, 8=9.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.021 issued rwts: total=8789,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.021 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:48.021 filename1: (groupid=0, jobs=1): err= 0: pid=3903200: Mon Jul 15 23:53:22 2024 00:26:48.021 read: IOPS=1757, BW=13.7MiB/s (14.4MB/s)(68.7MiB/5003msec) 00:26:48.021 slat (nsec): min=7730, max=69805, avg=22344.79, stdev=9530.06 00:26:48.021 clat (usec): min=1043, max=8125, avg=4469.82, stdev=381.45 00:26:48.021 lat (usec): min=1063, max=8161, avg=4492.16, stdev=381.14 00:26:48.021 clat percentiles (usec): 00:26:48.021 | 1.00th=[ 3458], 5.00th=[ 4047], 10.00th=[ 4228], 20.00th=[ 4359], 00:26:48.021 | 30.00th=[ 4359], 40.00th=[ 4424], 50.00th=[ 4490], 60.00th=[ 4490], 00:26:48.021 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 4686], 95.00th=[ 4817], 00:26:48.021 | 99.00th=[ 5997], 99.50th=[ 6521], 99.90th=[ 7635], 99.95th=[ 7701], 00:26:48.021 | 99.99th=[ 8094] 00:26:48.021 bw ( KiB/s): min=13952, max=14208, per=25.03%, avg=14058.80, stdev=68.72, samples=10 00:26:48.021 iops : min= 1744, max= 1776, avg=1757.30, stdev= 8.59, samples=10 00:26:48.021 lat (msec) : 2=0.15%, 4=4.25%, 10=95.60% 00:26:48.021 cpu : usr=94.74%, sys=4.68%, ctx=10, majf=0, minf=9 00:26:48.021 IO depths : 1=1.1%, 2=20.1%, 4=54.0%, 8=24.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:48.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.021 complete : 0=0.0%, 4=90.8%, 8=9.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.021 issued rwts: total=8793,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.021 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:48.021 00:26:48.021 Run status group 0 (all jobs): 00:26:48.021 READ: bw=54.9MiB/s (57.5MB/s), 13.7MiB/s-13.7MiB/s (14.4MB/s-14.4MB/s), io=275MiB (288MB), run=5001-5005msec 00:26:48.021 23:53:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:48.021 23:53:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:48.021 23:53:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:48.021 23:53:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:48.021 23:53:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:48.021 23:53:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:48.021 23:53:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.021 23:53:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:48.021 23:53:23 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.021 23:53:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:48.021 23:53:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.021 23:53:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:48.022 23:53:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.022 23:53:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:48.022 23:53:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:48.022 23:53:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:48.022 23:53:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:48.022 23:53:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.022 23:53:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:48.022 23:53:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.022 23:53:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:48.022 23:53:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.022 23:53:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:48.022 23:53:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.022 00:26:48.022 real 0m24.629s 00:26:48.022 user 4m35.844s 00:26:48.022 sys 0m6.097s 00:26:48.022 23:53:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:48.022 23:53:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:48.022 ************************************ 00:26:48.022 END TEST fio_dif_rand_params 00:26:48.022 ************************************ 00:26:48.281 23:53:23 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:48.281 23:53:23 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:48.281 23:53:23 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:48.281 23:53:23 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:48.281 23:53:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:48.281 ************************************ 00:26:48.281 START TEST fio_dif_digest 00:26:48.281 ************************************ 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest 
-- target/dif.sh@128 -- # ddgst=true 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:48.281 bdev_null0 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:48.281 [2024-07-15 23:53:23.222505] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:48.281 { 00:26:48.281 "params": { 00:26:48.281 "name": "Nvme$subsystem", 00:26:48.281 "trtype": "$TEST_TRANSPORT", 00:26:48.281 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:48.281 "adrfam": "ipv4", 00:26:48.281 "trsvcid": "$NVMF_PORT", 00:26:48.281 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:48.281 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:48.281 "hdgst": ${hdgst:-false}, 00:26:48.281 "ddgst": ${ddgst:-false} 00:26:48.281 }, 00:26:48.281 "method": "bdev_nvme_attach_controller" 00:26:48.281 } 00:26:48.281 EOF 00:26:48.281 )") 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:48.281 23:53:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:48.282 23:53:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:26:48.282 23:53:23 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:26:48.282 23:53:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:48.282 23:53:23 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:26:48.282 23:53:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:26:48.282 23:53:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:48.282 23:53:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:26:48.282 23:53:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:26:48.282 23:53:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:48.282 "params": { 00:26:48.282 "name": "Nvme0", 00:26:48.282 "trtype": "tcp", 00:26:48.282 "traddr": "10.0.0.2", 00:26:48.282 "adrfam": "ipv4", 00:26:48.282 "trsvcid": "4420", 00:26:48.282 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:48.282 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:48.282 "hdgst": true, 00:26:48.282 "ddgst": true 00:26:48.282 }, 00:26:48.282 "method": "bdev_nvme_attach_controller" 00:26:48.282 }' 00:26:48.282 23:53:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:48.282 23:53:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:48.282 23:53:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:48.282 23:53:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:48.282 23:53:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:48.282 23:53:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:48.282 23:53:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:48.282 23:53:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:48.282 23:53:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:48.282 23:53:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:48.539 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:48.539 ... 
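[annotation] Before this job was kicked off, create_subsystems provisioned the DIF-protected null bdev and its TCP listener — the rpc_cmd calls traced a few lines up. rpc_cmd is the suite's wrapper that effectively drives scripts/rpc.py against the target's RPC socket, so the equivalent direct sequence (arguments copied from the trace) is:

scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420

With "hdgst": true and "ddgst": true in the attach params above, header and data digests are negotiated on the NVMe/TCP connection — which is what fio_dif_digest exercises.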
00:26:48.539 fio-3.35 00:26:48.539 Starting 3 threads 00:26:48.539 EAL: No free 2048 kB hugepages reported on node 1 00:27:00.731 00:27:00.731 filename0: (groupid=0, jobs=1): err= 0: pid=3904073: Mon Jul 15 23:53:34 2024 00:27:00.731 read: IOPS=198, BW=24.8MiB/s (26.1MB/s)(250MiB/10047msec) 00:27:00.731 slat (nsec): min=6074, max=37176, avg=13917.98, stdev=3457.25 00:27:00.731 clat (usec): min=12008, max=48316, avg=15053.45, stdev=1482.53 00:27:00.731 lat (usec): min=12021, max=48329, avg=15067.37, stdev=1482.56 00:27:00.731 clat percentiles (usec): 00:27:00.731 | 1.00th=[12780], 5.00th=[13435], 10.00th=[13829], 20.00th=[14222], 00:27:00.731 | 30.00th=[14484], 40.00th=[14746], 50.00th=[15008], 60.00th=[15270], 00:27:00.731 | 70.00th=[15533], 80.00th=[15795], 90.00th=[16319], 95.00th=[16909], 00:27:00.731 | 99.00th=[17695], 99.50th=[18482], 99.90th=[46924], 99.95th=[48497], 00:27:00.731 | 99.99th=[48497] 00:27:00.731 bw ( KiB/s): min=24832, max=26368, per=32.41%, avg=25536.00, stdev=379.48, samples=20 00:27:00.731 iops : min= 194, max= 206, avg=199.50, stdev= 2.96, samples=20 00:27:00.731 lat (msec) : 20=99.75%, 50=0.25% 00:27:00.731 cpu : usr=91.12%, sys=8.37%, ctx=30, majf=0, minf=51 00:27:00.731 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:00.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:00.731 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:00.731 issued rwts: total=1997,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:00.731 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:00.731 filename0: (groupid=0, jobs=1): err= 0: pid=3904074: Mon Jul 15 23:53:34 2024 00:27:00.731 read: IOPS=215, BW=27.0MiB/s (28.3MB/s)(271MiB/10047msec) 00:27:00.731 slat (nsec): min=7924, max=39109, avg=13824.47, stdev=3225.98 00:27:00.731 clat (usec): min=10396, max=54912, avg=13851.88, stdev=1544.67 00:27:00.731 lat (usec): min=10424, max=54925, avg=13865.70, stdev=1544.70 00:27:00.731 clat percentiles (usec): 00:27:00.731 | 1.00th=[11469], 5.00th=[12256], 10.00th=[12518], 20.00th=[13042], 00:27:00.731 | 30.00th=[13304], 40.00th=[13566], 50.00th=[13829], 60.00th=[14091], 00:27:00.731 | 70.00th=[14353], 80.00th=[14615], 90.00th=[15008], 95.00th=[15270], 00:27:00.731 | 99.00th=[16188], 99.50th=[16581], 99.90th=[20317], 99.95th=[52691], 00:27:00.731 | 99.99th=[54789] 00:27:00.731 bw ( KiB/s): min=26624, max=29184, per=35.22%, avg=27750.40, stdev=565.77, samples=20 00:27:00.731 iops : min= 208, max= 228, avg=216.80, stdev= 4.42, samples=20 00:27:00.731 lat (msec) : 20=99.82%, 50=0.09%, 100=0.09% 00:27:00.731 cpu : usr=90.26%, sys=9.11%, ctx=23, majf=0, minf=81 00:27:00.731 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:00.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:00.731 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:00.731 issued rwts: total=2170,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:00.731 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:00.731 filename0: (groupid=0, jobs=1): err= 0: pid=3904075: Mon Jul 15 23:53:34 2024 00:27:00.731 read: IOPS=200, BW=25.1MiB/s (26.3MB/s)(252MiB/10045msec) 00:27:00.731 slat (nsec): min=6219, max=47263, avg=14087.07, stdev=3909.48 00:27:00.731 clat (usec): min=11627, max=51147, avg=14894.43, stdev=1509.72 00:27:00.731 lat (usec): min=11662, max=51162, avg=14908.51, stdev=1509.58 00:27:00.731 clat percentiles (usec): 00:27:00.731 | 1.00th=[12649], 
5.00th=[13173], 10.00th=[13698], 20.00th=[14091], 00:27:00.731 | 30.00th=[14353], 40.00th=[14615], 50.00th=[14877], 60.00th=[15008], 00:27:00.731 | 70.00th=[15401], 80.00th=[15664], 90.00th=[16188], 95.00th=[16581], 00:27:00.731 | 99.00th=[17433], 99.50th=[17695], 99.90th=[21627], 99.95th=[49021], 00:27:00.731 | 99.99th=[51119] 00:27:00.731 bw ( KiB/s): min=25088, max=26368, per=32.73%, avg=25794.50, stdev=376.53, samples=20 00:27:00.731 iops : min= 196, max= 206, avg=201.50, stdev= 2.96, samples=20 00:27:00.731 lat (msec) : 20=99.75%, 50=0.20%, 100=0.05% 00:27:00.731 cpu : usr=90.78%, sys=8.60%, ctx=17, majf=0, minf=144 00:27:00.731 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:00.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:00.731 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:00.731 issued rwts: total=2018,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:00.731 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:00.731 00:27:00.731 Run status group 0 (all jobs): 00:27:00.731 READ: bw=77.0MiB/s (80.7MB/s), 24.8MiB/s-27.0MiB/s (26.1MB/s-28.3MB/s), io=773MiB (811MB), run=10045-10047msec 00:27:00.731 23:53:34 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:27:00.731 23:53:34 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:27:00.731 23:53:34 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:27:00.731 23:53:34 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:00.731 23:53:34 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:27:00.731 23:53:34 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:00.731 23:53:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.731 23:53:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:00.731 23:53:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.731 23:53:34 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:00.731 23:53:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.731 23:53:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:00.731 23:53:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.731 00:27:00.731 real 0m11.237s 00:27:00.731 user 0m28.505s 00:27:00.731 sys 0m2.928s 00:27:00.731 23:53:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:00.731 23:53:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:00.731 ************************************ 00:27:00.731 END TEST fio_dif_digest 00:27:00.731 ************************************ 00:27:00.731 23:53:34 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:27:00.731 23:53:34 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:27:00.731 23:53:34 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:27:00.731 23:53:34 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:00.731 23:53:34 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:27:00.731 23:53:34 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:00.731 23:53:34 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:27:00.731 23:53:34 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:00.731 23:53:34 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:27:00.731 rmmod nvme_tcp 00:27:00.731 rmmod nvme_fabrics 00:27:00.731 rmmod nvme_keyring 00:27:00.731 23:53:34 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:00.731 23:53:34 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:27:00.731 23:53:34 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:27:00.731 23:53:34 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 3897889 ']' 00:27:00.731 23:53:34 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 3897889 00:27:00.731 23:53:34 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 3897889 ']' 00:27:00.731 23:53:34 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 3897889 00:27:00.731 23:53:34 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:27:00.731 23:53:34 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:00.731 23:53:34 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3897889 00:27:00.731 23:53:34 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:00.731 23:53:34 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:00.731 23:53:34 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3897889' 00:27:00.731 killing process with pid 3897889 00:27:00.731 23:53:34 nvmf_dif -- common/autotest_common.sh@967 -- # kill 3897889 00:27:00.731 23:53:34 nvmf_dif -- common/autotest_common.sh@972 -- # wait 3897889 00:27:00.731 23:53:34 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:27:00.731 23:53:34 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:00.731 Waiting for block devices as requested 00:27:00.731 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:00.991 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:00.991 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:00.991 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:01.251 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:01.251 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:01.251 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:01.251 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:01.509 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:27:01.509 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:01.768 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:01.768 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:01.768 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:02.029 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:02.029 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:02.029 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:02.029 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:02.288 23:53:37 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:02.288 23:53:37 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:02.288 23:53:37 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:02.289 23:53:37 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:02.289 23:53:37 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:02.289 23:53:37 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:02.289 23:53:37 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:04.195 23:53:39 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:04.195 00:27:04.195 real 1m7.236s 00:27:04.195 user 6m31.552s 00:27:04.195 sys 0m18.469s 00:27:04.195 23:53:39 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:27:04.195 23:53:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:04.195 ************************************ 00:27:04.195 END TEST nvmf_dif 00:27:04.195 ************************************ 00:27:04.195 23:53:39 -- common/autotest_common.sh@1142 -- # return 0 00:27:04.195 23:53:39 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:04.195 23:53:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:04.195 23:53:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:04.195 23:53:39 -- common/autotest_common.sh@10 -- # set +x 00:27:04.453 ************************************ 00:27:04.454 START TEST nvmf_abort_qd_sizes 00:27:04.454 ************************************ 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:04.454 * Looking for test storage... 00:27:04.454 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:04.454 23:53:39 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:27:04.454 23:53:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:06.357 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:06.357 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:06.357 Found net devices under 0000:09:00.0: cvl_0_0 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:06.357 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:06.358 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:06.358 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:06.358 Found net devices under 0000:09:00.1: cvl_0_1 00:27:06.358 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:06.358 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:27:06.358 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:27:06.358 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:06.358 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:06.358 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:06.358 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:06.358 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:06.358 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:06.358 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:06.358 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:06.358 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:06.358 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:06.358 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:06.358 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:06.358 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:06.358 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:06.358 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:06.358 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:06.358 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:06.358 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:06.358 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:06.358 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:06.617 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:06.617 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:06.617 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:06.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:06.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:27:06.617 00:27:06.618 --- 10.0.0.2 ping statistics --- 00:27:06.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.618 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:27:06.618 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:06.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:06.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:27:06.618 00:27:06.618 --- 10.0.0.1 ping statistics --- 00:27:06.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.618 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:27:06.618 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:06.618 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:27:06.618 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:27:06.618 23:53:41 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:08.028 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:08.028 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:08.028 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:08.028 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:08.028 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:08.028 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:08.028 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:08.028 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:08.028 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:08.028 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:08.028 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:08.028 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:08.028 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:08.028 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:08.028 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:08.028 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:08.965 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:27:08.965 23:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:08.965 23:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:08.965 23:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:08.965 23:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:08.965 23:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:08.965 23:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:08.965 23:53:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:27:08.965 23:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:08.965 23:53:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:08.965 23:53:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:08.965 23:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=3908981 00:27:08.965 23:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:27:08.965 23:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 3908981 00:27:08.965 23:53:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 3908981 ']' 00:27:08.965 23:53:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:08.965 23:53:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:08.965 23:53:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:08.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:08.965 23:53:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:08.965 23:53:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:08.965 [2024-07-15 23:53:44.034191] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:27:08.965 [2024-07-15 23:53:44.034286] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:08.965 EAL: No free 2048 kB hugepages reported on node 1 00:27:09.223 [2024-07-15 23:53:44.099370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:09.223 [2024-07-15 23:53:44.211427] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:09.224 [2024-07-15 23:53:44.211479] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:09.224 [2024-07-15 23:53:44.211492] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:09.224 [2024-07-15 23:53:44.211503] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:09.224 [2024-07-15 23:53:44.211512] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:09.224 [2024-07-15 23:53:44.214975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:09.224 [2024-07-15 23:53:44.215042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:09.224 [2024-07-15 23:53:44.215107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:09.224 [2024-07-15 23:53:44.215110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.224 23:53:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:09.224 23:53:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:27:09.224 23:53:44 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:09.224 23:53:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:09.224 23:53:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:09.482 23:53:44 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:09.482 23:53:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:27:09.482 23:53:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:27:09.482 23:53:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:27:09.482 23:53:44 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:27:09.482 23:53:44 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:27:09.482 23:53:44 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:0b:00.0 ]] 00:27:09.482 23:53:44 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:27:09.482 23:53:44 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:27:09.482 23:53:44 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:0b:00.0 ]] 00:27:09.482 23:53:44 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:27:09.482 23:53:44 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:27:09.482 23:53:44 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:27:09.482 23:53:44 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:27:09.482 23:53:44 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:0b:00.0 00:27:09.482 23:53:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:27:09.482 23:53:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:0b:00.0 00:27:09.482 23:53:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:27:09.482 23:53:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:09.482 23:53:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:09.482 23:53:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:09.482 ************************************ 00:27:09.482 START TEST spdk_target_abort 00:27:09.482 ************************************ 00:27:09.482 23:53:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:27:09.482 23:53:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:27:09.482 23:53:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:0b:00.0 -b spdk_target 00:27:09.482 23:53:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.482 23:53:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:12.763 spdk_targetn1 00:27:12.763 23:53:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.763 23:53:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:12.763 23:53:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.763 23:53:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:12.763 [2024-07-15 23:53:47.240617] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:12.763 23:53:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.763 23:53:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:27:12.763 23:53:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.763 23:53:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:12.763 23:53:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.763 23:53:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:27:12.763 23:53:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.763 23:53:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:12.763 23:53:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.763 23:53:47 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:27:12.763 23:53:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.763 23:53:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:12.763 [2024-07-15 23:53:47.272875] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:12.763 23:53:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.763 23:53:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:27:12.763 23:53:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:12.763 23:53:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:12.763 23:53:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:27:12.763 23:53:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:12.763 23:53:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:12.763 23:53:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:12.763 23:53:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:12.763 23:53:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:12.763 23:53:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:12.763 23:53:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:12.763 23:53:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:12.763 23:53:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:12.763 23:53:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:12.763 23:53:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:27:12.763 23:53:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:12.763 23:53:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:12.763 23:53:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:12.763 23:53:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:12.763 23:53:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:12.763 23:53:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:12.763 EAL: No free 2048 kB hugepages 
reported on node 1 00:27:16.045 Initializing NVMe Controllers 00:27:16.045 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:16.045 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:16.045 Initialization complete. Launching workers. 00:27:16.045 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12174, failed: 0 00:27:16.045 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1213, failed to submit 10961 00:27:16.045 success 778, unsuccess 435, failed 0 00:27:16.045 23:53:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:16.045 23:53:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:16.045 EAL: No free 2048 kB hugepages reported on node 1 00:27:19.337 Initializing NVMe Controllers 00:27:19.337 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:19.337 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:19.337 Initialization complete. Launching workers. 00:27:19.337 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8654, failed: 0 00:27:19.337 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1259, failed to submit 7395 00:27:19.337 success 347, unsuccess 912, failed 0 00:27:19.337 23:53:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:19.337 23:53:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:19.337 EAL: No free 2048 kB hugepages reported on node 1 00:27:22.624 Initializing NVMe Controllers 00:27:22.624 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:22.624 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:22.624 Initialization complete. Launching workers. 
00:27:22.624 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31520, failed: 0 00:27:22.624 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2751, failed to submit 28769 00:27:22.624 success 533, unsuccess 2218, failed 0 00:27:22.624 23:53:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:27:22.624 23:53:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.624 23:53:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:22.624 23:53:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.624 23:53:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:27:22.624 23:53:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.624 23:53:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:23.191 23:53:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.191 23:53:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3908981 00:27:23.191 23:53:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 3908981 ']' 00:27:23.191 23:53:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 3908981 00:27:23.191 23:53:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:27:23.191 23:53:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:23.191 23:53:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3908981 00:27:23.191 23:53:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:23.191 23:53:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:23.191 23:53:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3908981' 00:27:23.191 killing process with pid 3908981 00:27:23.191 23:53:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 3908981 00:27:23.191 23:53:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 3908981 00:27:23.450 00:27:23.450 real 0m14.140s 00:27:23.450 user 0m53.350s 00:27:23.450 sys 0m2.647s 00:27:23.450 23:53:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:23.450 23:53:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:23.450 ************************************ 00:27:23.450 END TEST spdk_target_abort 00:27:23.450 ************************************ 00:27:23.450 23:53:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:27:23.450 23:53:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:27:23.450 23:53:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:23.450 23:53:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:23.450 23:53:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:23.708 
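The spdk_target_abort run above sweeps SPDK's abort example over queue depths 4, 24 and 64 against the TCP listener it just created, recording per-run abort totals. A minimal sketch of that sweep as rabort() in target/abort_qd_sizes.sh performs it, reusing the transport-ID string the trace shows the script assembling (the workspace path is specific to this CI node and will differ elsewhere):

# qd sweep performed by rabort() in target/abort_qd_sizes.sh
TRID='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
for qd in 4 24 64; do
    # 50/50 read-write mix, 4 KiB I/O, aborts issued against in-flight commands
    ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$TRID"
done

Each iteration reports how many aborts were submitted and how many completed successfully; those are the "success/unsuccess" totals in the output above and below.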
************************************ 00:27:23.708 START TEST kernel_target_abort 00:27:23.708 ************************************ 00:27:23.708 23:53:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:27:23.708 23:53:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:27:23.708 23:53:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:27:23.708 23:53:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:23.708 23:53:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:23.708 23:53:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.708 23:53:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.708 23:53:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:23.708 23:53:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.708 23:53:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:23.708 23:53:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:23.708 23:53:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:23.708 23:53:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:23.708 23:53:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:23.708 23:53:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:23.708 23:53:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:23.709 23:53:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:23.709 23:53:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:23.709 23:53:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:27:23.709 23:53:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:23.709 23:53:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:23.709 23:53:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:23.709 23:53:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:25.084 Waiting for block devices as requested 00:27:25.084 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:25.084 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:25.084 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:25.084 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:25.342 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:25.342 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:25.342 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:25.342 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:25.600 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:27:25.600 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:25.600 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:25.859 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:25.859 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:25.859 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:25.859 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:25.859 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:26.118 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:26.118 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:26.118 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:26.118 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:26.118 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:26.118 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:26.118 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:26.118 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:26.118 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:26.118 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:26.118 No valid GPT data, bailing 00:27:26.118 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:26.118 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:27:26.118 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:27:26.118 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:26.118 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:26.118 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:26.118 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:26.118 23:54:01 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:26.118 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:26.118 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:27:26.118 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:26.118 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:27:26.118 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:26.118 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:27:26.118 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:27:26.118 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:27:26.118 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:26.376 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:27:26.376 00:27:26.376 Discovery Log Number of Records 2, Generation counter 2 00:27:26.376 =====Discovery Log Entry 0====== 00:27:26.376 trtype: tcp 00:27:26.376 adrfam: ipv4 00:27:26.376 subtype: current discovery subsystem 00:27:26.376 treq: not specified, sq flow control disable supported 00:27:26.376 portid: 1 00:27:26.376 trsvcid: 4420 00:27:26.376 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:26.376 traddr: 10.0.0.1 00:27:26.376 eflags: none 00:27:26.376 sectype: none 00:27:26.376 =====Discovery Log Entry 1====== 00:27:26.376 trtype: tcp 00:27:26.376 adrfam: ipv4 00:27:26.376 subtype: nvme subsystem 00:27:26.376 treq: not specified, sq flow control disable supported 00:27:26.376 portid: 1 00:27:26.376 trsvcid: 4420 00:27:26.376 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:26.376 traddr: 10.0.0.1 00:27:26.376 eflags: none 00:27:26.376 sectype: none 00:27:26.376 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:27:26.376 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:26.376 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:26.376 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:27:26.376 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:26.376 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:26.376 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:26.376 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:26.376 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:26.376 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:26.376 23:54:01 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:26.376 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:26.376 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:26.376 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:26.376 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:27:26.376 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:26.376 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:27:26.376 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:26.376 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:26.376 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:26.376 23:54:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:26.376 EAL: No free 2048 kB hugepages reported on node 1 00:27:29.660 Initializing NVMe Controllers 00:27:29.660 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:29.660 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:29.660 Initialization complete. Launching workers. 00:27:29.660 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 48884, failed: 0 00:27:29.660 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 48884, failed to submit 0 00:27:29.660 success 0, unsuccess 48884, failed 0 00:27:29.660 23:54:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:29.660 23:54:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:29.660 EAL: No free 2048 kB hugepages reported on node 1 00:27:32.973 Initializing NVMe Controllers 00:27:32.973 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:32.973 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:32.973 Initialization complete. Launching workers. 
00:27:32.973 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 92568, failed: 0 00:27:32.973 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 23286, failed to submit 69282 00:27:32.973 success 0, unsuccess 23286, failed 0 00:27:32.973 23:54:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:32.973 23:54:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:32.973 EAL: No free 2048 kB hugepages reported on node 1 00:27:36.255 Initializing NVMe Controllers 00:27:36.255 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:36.255 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:36.255 Initialization complete. Launching workers. 00:27:36.255 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 90197, failed: 0 00:27:36.255 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22542, failed to submit 67655 00:27:36.255 success 0, unsuccess 22542, failed 0 00:27:36.255 23:54:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:27:36.255 23:54:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:36.255 23:54:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:27:36.255 23:54:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:36.255 23:54:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:36.255 23:54:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:36.255 23:54:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:36.255 23:54:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:36.255 23:54:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:36.255 23:54:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:36.825 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:36.825 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:36.825 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:36.825 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:36.825 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:36.825 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:37.083 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:37.083 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:37.083 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:37.083 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:37.083 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:37.083 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:37.083 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:37.083 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:27:37.083 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:37.083 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:38.018 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:27:38.018 00:27:38.018 real 0m14.508s 00:27:38.018 user 0m6.251s 00:27:38.018 sys 0m3.391s 00:27:38.018 23:54:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:38.018 23:54:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:38.018 ************************************ 00:27:38.018 END TEST kernel_target_abort 00:27:38.018 ************************************ 00:27:38.018 23:54:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:27:38.018 23:54:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:38.018 23:54:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:27:38.018 23:54:13 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:38.018 23:54:13 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:27:38.018 23:54:13 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:38.018 23:54:13 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:27:38.018 23:54:13 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:38.018 23:54:13 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:38.018 rmmod nvme_tcp 00:27:38.285 rmmod nvme_fabrics 00:27:38.285 rmmod nvme_keyring 00:27:38.285 23:54:13 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:38.285 23:54:13 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:27:38.285 23:54:13 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:27:38.285 23:54:13 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 3908981 ']' 00:27:38.285 23:54:13 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 3908981 00:27:38.285 23:54:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 3908981 ']' 00:27:38.285 23:54:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 3908981 00:27:38.285 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3908981) - No such process 00:27:38.285 23:54:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 3908981 is not found' 00:27:38.285 Process with pid 3908981 is not found 00:27:38.285 23:54:13 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:27:38.285 23:54:13 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:39.662 Waiting for block devices as requested 00:27:39.662 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:39.662 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:39.662 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:39.662 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:39.662 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:39.662 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:39.921 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:39.921 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:39.921 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:27:40.181 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:40.181 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:40.181 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:40.441 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:40.441 0000:80:04.3 (8086 0e23): vfio-pci -> 
ioatdma 00:27:40.441 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:40.700 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:40.700 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:40.700 23:54:15 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:40.700 23:54:15 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:40.700 23:54:15 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:40.700 23:54:15 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:40.700 23:54:15 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:40.700 23:54:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:40.700 23:54:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:43.232 23:54:17 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:43.232 00:27:43.232 real 0m38.501s 00:27:43.232 user 1m1.735s 00:27:43.232 sys 0m9.724s 00:27:43.232 23:54:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:43.232 23:54:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:43.232 ************************************ 00:27:43.232 END TEST nvmf_abort_qd_sizes 00:27:43.232 ************************************ 00:27:43.232 23:54:17 -- common/autotest_common.sh@1142 -- # return 0 00:27:43.232 23:54:17 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:27:43.232 23:54:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:43.232 23:54:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:43.232 23:54:17 -- common/autotest_common.sh@10 -- # set +x 00:27:43.232 ************************************ 00:27:43.232 START TEST keyring_file 00:27:43.232 ************************************ 00:27:43.232 23:54:17 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:27:43.232 * Looking for test storage... 
00:27:43.232 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:27:43.232 23:54:17 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:27:43.232 23:54:17 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:43.232 23:54:17 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:27:43.233 23:54:17 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:43.233 23:54:17 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:43.233 23:54:17 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:43.233 23:54:17 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:43.233 23:54:17 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:43.233 23:54:17 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:43.233 23:54:17 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:43.233 23:54:17 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:43.233 23:54:17 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:43.233 23:54:17 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:43.233 23:54:17 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:43.233 23:54:17 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:43.233 23:54:17 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:43.233 23:54:17 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:43.233 23:54:17 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:43.233 23:54:17 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:43.233 23:54:17 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:43.233 23:54:17 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:43.233 23:54:17 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:43.233 23:54:17 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:43.233 23:54:17 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.233 23:54:17 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.233 23:54:17 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.233 23:54:17 keyring_file -- paths/export.sh@5 -- # export PATH 00:27:43.233 23:54:17 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.233 23:54:17 keyring_file -- nvmf/common.sh@47 -- # : 0 00:27:43.233 23:54:17 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:43.233 23:54:17 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:43.233 23:54:17 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:43.233 23:54:17 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:43.233 23:54:17 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:43.233 23:54:17 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:43.233 23:54:17 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:43.233 23:54:17 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:43.233 23:54:17 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:43.233 23:54:17 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:43.233 23:54:17 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:43.233 23:54:17 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:27:43.233 23:54:17 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:27:43.233 23:54:17 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:27:43.233 23:54:17 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:43.233 23:54:17 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:43.233 23:54:17 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:43.233 23:54:17 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:43.233 23:54:17 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:43.233 23:54:17 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:43.233 23:54:17 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.tauTbviIrL 00:27:43.233 23:54:17 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:43.233 23:54:17 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:43.233 23:54:17 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:43.233 23:54:17 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:43.233 23:54:17 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:43.233 23:54:17 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:43.233 23:54:17 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:43.233 23:54:17 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.tauTbviIrL 00:27:43.233 23:54:17 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.tauTbviIrL 00:27:43.233 23:54:17 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.tauTbviIrL 00:27:43.233 23:54:17 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:27:43.233 23:54:17 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:43.233 23:54:17 keyring_file -- keyring/common.sh@17 -- # name=key1 00:27:43.233 23:54:17 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:43.233 23:54:17 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:43.233 23:54:17 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:43.233 23:54:17 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.rGIKYa9YPG 00:27:43.233 23:54:17 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:43.233 23:54:17 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:27:43.233 23:54:17 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:43.233 23:54:17 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:43.233 23:54:17 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:43.233 23:54:17 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:43.233 23:54:17 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:43.233 23:54:18 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.rGIKYa9YPG 00:27:43.233 23:54:18 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.rGIKYa9YPG 00:27:43.233 23:54:18 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.rGIKYa9YPG 00:27:43.233 23:54:18 keyring_file -- keyring/file.sh@30 -- # tgtpid=3915367 00:27:43.233 23:54:18 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:27:43.233 23:54:18 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3915367 00:27:43.233 23:54:18 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3915367 ']' 00:27:43.233 23:54:18 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:43.233 23:54:18 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:43.233 23:54:18 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:43.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:43.233 23:54:18 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:43.233 23:54:18 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:43.233 [2024-07-15 23:54:18.073006] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
00:27:43.233 [2024-07-15 23:54:18.073106] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3915367 ] 00:27:43.233 EAL: No free 2048 kB hugepages reported on node 1 00:27:43.233 [2024-07-15 23:54:18.129413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.233 [2024-07-15 23:54:18.237137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:44.171 23:54:19 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:44.171 23:54:19 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:44.171 23:54:19 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:27:44.171 23:54:19 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.171 23:54:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:44.171 [2024-07-15 23:54:19.010173] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:44.171 null0 00:27:44.171 [2024-07-15 23:54:19.042218] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:44.171 [2024-07-15 23:54:19.042534] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:44.171 [2024-07-15 23:54:19.050222] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:44.171 23:54:19 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.171 23:54:19 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:44.171 23:54:19 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:44.171 23:54:19 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:44.171 23:54:19 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:44.171 23:54:19 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:44.171 23:54:19 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:44.171 23:54:19 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:44.171 23:54:19 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:44.171 23:54:19 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.171 23:54:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:44.171 [2024-07-15 23:54:19.062276] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:27:44.171 request: 00:27:44.171 { 00:27:44.171 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:27:44.171 "secure_channel": false, 00:27:44.171 "listen_address": { 00:27:44.171 "trtype": "tcp", 00:27:44.171 "traddr": "127.0.0.1", 00:27:44.171 "trsvcid": "4420" 00:27:44.171 }, 00:27:44.171 "method": "nvmf_subsystem_add_listener", 00:27:44.171 "req_id": 1 00:27:44.171 } 00:27:44.171 Got JSON-RPC error response 00:27:44.171 response: 00:27:44.171 { 00:27:44.171 "code": -32602, 00:27:44.171 "message": "Invalid parameters" 00:27:44.171 } 00:27:44.171 23:54:19 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:44.171 23:54:19 keyring_file -- common/autotest_common.sh@651 -- # es=1 
00:27:44.171 23:54:19 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:44.171 23:54:19 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:44.171 23:54:19 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:44.171 23:54:19 keyring_file -- keyring/file.sh@46 -- # bperfpid=3915503 00:27:44.171 23:54:19 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:27:44.171 23:54:19 keyring_file -- keyring/file.sh@48 -- # waitforlisten 3915503 /var/tmp/bperf.sock 00:27:44.171 23:54:19 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3915503 ']' 00:27:44.171 23:54:19 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:44.171 23:54:19 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:44.171 23:54:19 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:44.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:44.171 23:54:19 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:44.171 23:54:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:44.171 [2024-07-15 23:54:19.107462] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 00:27:44.171 [2024-07-15 23:54:19.107525] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3915503 ] 00:27:44.171 EAL: No free 2048 kB hugepages reported on node 1 00:27:44.171 [2024-07-15 23:54:19.163100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.171 [2024-07-15 23:54:19.268052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:44.429 23:54:19 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:44.429 23:54:19 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:44.429 23:54:19 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.tauTbviIrL 00:27:44.429 23:54:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.tauTbviIrL 00:27:44.687 23:54:19 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.rGIKYa9YPG 00:27:44.687 23:54:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.rGIKYa9YPG 00:27:44.945 23:54:19 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:27:44.945 23:54:19 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:27:44.945 23:54:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:44.945 23:54:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:44.945 23:54:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:45.203 23:54:20 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.tauTbviIrL == \/\t\m\p\/\t\m\p\.\t\a\u\T\b\v\i\I\r\L ]] 00:27:45.203 23:54:20 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:27:45.203 23:54:20 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:27:45.203 23:54:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:45.203 23:54:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:45.203 23:54:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:45.461 23:54:20 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.rGIKYa9YPG == \/\t\m\p\/\t\m\p\.\r\G\I\K\Y\a\9\Y\P\G ]] 00:27:45.461 23:54:20 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:27:45.461 23:54:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:45.461 23:54:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:45.461 23:54:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:45.461 23:54:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:45.461 23:54:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:45.721 23:54:20 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:27:45.721 23:54:20 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:27:45.721 23:54:20 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:45.721 23:54:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:45.721 23:54:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:45.721 23:54:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:45.721 23:54:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:45.980 23:54:20 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:27:45.980 23:54:20 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:45.980 23:54:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:45.980 [2024-07-15 23:54:21.103855] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:46.237 nvme0n1 00:27:46.237 23:54:21 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:27:46.237 23:54:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:46.237 23:54:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:46.237 23:54:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:46.237 23:54:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:46.237 23:54:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:46.494 23:54:21 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:27:46.494 23:54:21 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:27:46.494 23:54:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:46.494 23:54:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:46.494 23:54:21 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:46.494 23:54:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:46.494 23:54:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:46.751 23:54:21 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:27:46.751 23:54:21 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:46.751 Running I/O for 1 seconds... 00:27:47.683 00:27:47.683 Latency(us) 00:27:47.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:47.683 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:27:47.683 nvme0n1 : 1.01 8925.37 34.86 0.00 0.00 14273.35 4271.98 20874.43 00:27:47.683 =================================================================================================================== 00:27:47.683 Total : 8925.37 34.86 0.00 0.00 14273.35 4271.98 20874.43 00:27:47.683 0 00:27:47.683 23:54:22 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:47.683 23:54:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:47.941 23:54:23 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:27:47.941 23:54:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:47.941 23:54:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:47.941 23:54:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:47.942 23:54:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:47.942 23:54:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:48.200 23:54:23 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:27:48.200 23:54:23 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:27:48.200 23:54:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:48.200 23:54:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:48.200 23:54:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:48.200 23:54:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:48.200 23:54:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:48.458 23:54:23 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:27:48.458 23:54:23 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:48.458 23:54:23 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:48.458 23:54:23 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:48.458 23:54:23 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:48.458 23:54:23 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:48.458 23:54:23 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:48.458 23:54:23 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:48.458 23:54:23 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:48.458 23:54:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:48.715 [2024-07-15 23:54:23.781423] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:48.715 [2024-07-15 23:54:23.782047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb6a9a0 (107): Transport endpoint is not connected 00:27:48.715 [2024-07-15 23:54:23.783038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb6a9a0 (9): Bad file descriptor 00:27:48.715 [2024-07-15 23:54:23.784038] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:48.715 [2024-07-15 23:54:23.784057] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:48.715 [2024-07-15 23:54:23.784071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:48.715 request: 00:27:48.715 { 00:27:48.715 "name": "nvme0", 00:27:48.715 "trtype": "tcp", 00:27:48.715 "traddr": "127.0.0.1", 00:27:48.715 "adrfam": "ipv4", 00:27:48.715 "trsvcid": "4420", 00:27:48.715 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:48.715 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:48.715 "prchk_reftag": false, 00:27:48.715 "prchk_guard": false, 00:27:48.715 "hdgst": false, 00:27:48.715 "ddgst": false, 00:27:48.715 "psk": "key1", 00:27:48.715 "method": "bdev_nvme_attach_controller", 00:27:48.715 "req_id": 1 00:27:48.715 } 00:27:48.715 Got JSON-RPC error response 00:27:48.715 response: 00:27:48.715 { 00:27:48.715 "code": -5, 00:27:48.715 "message": "Input/output error" 00:27:48.715 } 00:27:48.715 23:54:23 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:48.715 23:54:23 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:48.715 23:54:23 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:48.715 23:54:23 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:48.715 23:54:23 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:27:48.715 23:54:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:48.715 23:54:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:48.715 23:54:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:48.715 23:54:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:48.715 23:54:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:48.974 23:54:24 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:27:48.974 23:54:24 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:27:48.974 23:54:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:48.974 23:54:24 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:48.974 23:54:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:48.974 23:54:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:48.974 23:54:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:49.235 23:54:24 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:27:49.235 23:54:24 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:27:49.235 23:54:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:49.492 23:54:24 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:27:49.492 23:54:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:27:49.750 23:54:24 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:27:49.750 23:54:24 keyring_file -- keyring/file.sh@77 -- # jq length 00:27:49.750 23:54:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:50.037 23:54:25 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:27:50.037 23:54:25 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.tauTbviIrL 00:27:50.037 23:54:25 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.tauTbviIrL 00:27:50.037 23:54:25 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:50.037 23:54:25 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.tauTbviIrL 00:27:50.037 23:54:25 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:50.037 23:54:25 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:50.037 23:54:25 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:50.037 23:54:25 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:50.037 23:54:25 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.tauTbviIrL 00:27:50.037 23:54:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.tauTbviIrL 00:27:50.295 [2024-07-15 23:54:25.266077] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.tauTbviIrL': 0100660 00:27:50.295 [2024-07-15 23:54:25.266110] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:27:50.295 request: 00:27:50.295 { 00:27:50.295 "name": "key0", 00:27:50.295 "path": "/tmp/tmp.tauTbviIrL", 00:27:50.295 "method": "keyring_file_add_key", 00:27:50.295 "req_id": 1 00:27:50.295 } 00:27:50.295 Got JSON-RPC error response 00:27:50.295 response: 00:27:50.295 { 00:27:50.295 "code": -1, 00:27:50.295 "message": "Operation not permitted" 00:27:50.295 } 00:27:50.295 23:54:25 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:50.295 23:54:25 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:50.295 23:54:25 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:50.295 23:54:25 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:50.295 23:54:25 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.tauTbviIrL 00:27:50.295 23:54:25 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.tauTbviIrL 00:27:50.295 23:54:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.tauTbviIrL 00:27:50.553 23:54:25 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.tauTbviIrL 00:27:50.553 23:54:25 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:27:50.553 23:54:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:50.553 23:54:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:50.553 23:54:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:50.553 23:54:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:50.553 23:54:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:50.811 23:54:25 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:27:50.812 23:54:25 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:50.812 23:54:25 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:50.812 23:54:25 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:50.812 23:54:25 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:50.812 23:54:25 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:50.812 23:54:25 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:50.812 23:54:25 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:50.812 23:54:25 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:50.812 23:54:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:51.069 [2024-07-15 23:54:26.012123] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.tauTbviIrL': No such file or directory 00:27:51.069 [2024-07-15 23:54:26.012156] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:27:51.069 [2024-07-15 23:54:26.012198] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:27:51.069 [2024-07-15 23:54:26.012210] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:51.069 [2024-07-15 23:54:26.012221] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:27:51.069 request: 00:27:51.069 { 00:27:51.069 "name": "nvme0", 00:27:51.069 "trtype": "tcp", 00:27:51.069 "traddr": "127.0.0.1", 00:27:51.069 "adrfam": "ipv4", 00:27:51.069 
"trsvcid": "4420", 00:27:51.069 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:51.069 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:51.069 "prchk_reftag": false, 00:27:51.069 "prchk_guard": false, 00:27:51.069 "hdgst": false, 00:27:51.069 "ddgst": false, 00:27:51.069 "psk": "key0", 00:27:51.069 "method": "bdev_nvme_attach_controller", 00:27:51.069 "req_id": 1 00:27:51.069 } 00:27:51.069 Got JSON-RPC error response 00:27:51.069 response: 00:27:51.069 { 00:27:51.069 "code": -19, 00:27:51.069 "message": "No such device" 00:27:51.069 } 00:27:51.069 23:54:26 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:51.069 23:54:26 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:51.069 23:54:26 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:51.069 23:54:26 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:51.069 23:54:26 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:27:51.069 23:54:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:51.326 23:54:26 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:51.326 23:54:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:51.326 23:54:26 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:51.326 23:54:26 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:51.326 23:54:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:51.326 23:54:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:51.326 23:54:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.UJZLqu9sBG 00:27:51.326 23:54:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:51.326 23:54:26 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:51.326 23:54:26 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:51.326 23:54:26 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:51.326 23:54:26 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:51.326 23:54:26 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:51.326 23:54:26 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:51.326 23:54:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.UJZLqu9sBG 00:27:51.326 23:54:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.UJZLqu9sBG 00:27:51.326 23:54:26 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.UJZLqu9sBG 00:27:51.326 23:54:26 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.UJZLqu9sBG 00:27:51.326 23:54:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.UJZLqu9sBG 00:27:51.584 23:54:26 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:51.584 23:54:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:51.842 nvme0n1 00:27:51.842 
23:54:26 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:27:51.842 23:54:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:51.842 23:54:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:51.842 23:54:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:51.842 23:54:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:51.842 23:54:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:52.100 23:54:27 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:27:52.100 23:54:27 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:27:52.100 23:54:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:52.358 23:54:27 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:27:52.358 23:54:27 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:27:52.358 23:54:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:52.358 23:54:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:52.358 23:54:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:52.616 23:54:27 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:27:52.616 23:54:27 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:27:52.616 23:54:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:52.616 23:54:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:52.616 23:54:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:52.616 23:54:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:52.616 23:54:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:52.874 23:54:27 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:27:52.874 23:54:27 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:52.874 23:54:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:53.131 23:54:28 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:27:53.131 23:54:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:53.131 23:54:28 keyring_file -- keyring/file.sh@104 -- # jq length 00:27:53.389 23:54:28 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:27:53.389 23:54:28 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.UJZLqu9sBG 00:27:53.389 23:54:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.UJZLqu9sBG 00:27:53.647 23:54:28 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.rGIKYa9YPG 00:27:53.647 23:54:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.rGIKYa9YPG 00:27:53.904 23:54:28 
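The file.sh@99-102 checks above pin down SPDK's deferred key removal: with a controller attached the refcnt is 2, keyring_file_remove_key only marks the key (removed == true, refcnt drops to 1), and the key disappears from keyring_get_keys only after the controller detaches. A one-liner to inspect that state, assuming the same bperf socket:

# sketch: show refcnt/removed for key0 while a controller still holds it
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/bperf.sock keyring_get_keys |
    jq '.[] | select(.name == "key0") | {refcnt, removed}'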
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:53.904 23:54:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:54.161 nvme0n1 00:27:54.161 23:54:29 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:27:54.161 23:54:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:27:54.419 23:54:29 keyring_file -- keyring/file.sh@112 -- # config='{ 00:27:54.419 "subsystems": [ 00:27:54.419 { 00:27:54.419 "subsystem": "keyring", 00:27:54.419 "config": [ 00:27:54.419 { 00:27:54.419 "method": "keyring_file_add_key", 00:27:54.419 "params": { 00:27:54.419 "name": "key0", 00:27:54.419 "path": "/tmp/tmp.UJZLqu9sBG" 00:27:54.419 } 00:27:54.419 }, 00:27:54.419 { 00:27:54.419 "method": "keyring_file_add_key", 00:27:54.419 "params": { 00:27:54.419 "name": "key1", 00:27:54.419 "path": "/tmp/tmp.rGIKYa9YPG" 00:27:54.419 } 00:27:54.419 } 00:27:54.419 ] 00:27:54.419 }, 00:27:54.419 { 00:27:54.419 "subsystem": "iobuf", 00:27:54.419 "config": [ 00:27:54.419 { 00:27:54.419 "method": "iobuf_set_options", 00:27:54.419 "params": { 00:27:54.419 "small_pool_count": 8192, 00:27:54.419 "large_pool_count": 1024, 00:27:54.419 "small_bufsize": 8192, 00:27:54.419 "large_bufsize": 135168 00:27:54.419 } 00:27:54.419 } 00:27:54.419 ] 00:27:54.419 }, 00:27:54.419 { 00:27:54.419 "subsystem": "sock", 00:27:54.419 "config": [ 00:27:54.419 { 00:27:54.419 "method": "sock_set_default_impl", 00:27:54.419 "params": { 00:27:54.419 "impl_name": "posix" 00:27:54.419 } 00:27:54.419 }, 00:27:54.419 { 00:27:54.419 "method": "sock_impl_set_options", 00:27:54.419 "params": { 00:27:54.419 "impl_name": "ssl", 00:27:54.419 "recv_buf_size": 4096, 00:27:54.419 "send_buf_size": 4096, 00:27:54.419 "enable_recv_pipe": true, 00:27:54.419 "enable_quickack": false, 00:27:54.419 "enable_placement_id": 0, 00:27:54.419 "enable_zerocopy_send_server": true, 00:27:54.419 "enable_zerocopy_send_client": false, 00:27:54.419 "zerocopy_threshold": 0, 00:27:54.419 "tls_version": 0, 00:27:54.419 "enable_ktls": false 00:27:54.419 } 00:27:54.419 }, 00:27:54.419 { 00:27:54.419 "method": "sock_impl_set_options", 00:27:54.419 "params": { 00:27:54.419 "impl_name": "posix", 00:27:54.419 "recv_buf_size": 2097152, 00:27:54.419 "send_buf_size": 2097152, 00:27:54.419 "enable_recv_pipe": true, 00:27:54.419 "enable_quickack": false, 00:27:54.419 "enable_placement_id": 0, 00:27:54.419 "enable_zerocopy_send_server": true, 00:27:54.419 "enable_zerocopy_send_client": false, 00:27:54.419 "zerocopy_threshold": 0, 00:27:54.419 "tls_version": 0, 00:27:54.419 "enable_ktls": false 00:27:54.419 } 00:27:54.419 } 00:27:54.419 ] 00:27:54.419 }, 00:27:54.419 { 00:27:54.419 "subsystem": "vmd", 00:27:54.419 "config": [] 00:27:54.419 }, 00:27:54.419 { 00:27:54.419 "subsystem": "accel", 00:27:54.419 "config": [ 00:27:54.419 { 00:27:54.419 "method": "accel_set_options", 00:27:54.419 "params": { 00:27:54.419 "small_cache_size": 128, 00:27:54.419 "large_cache_size": 16, 00:27:54.419 "task_count": 2048, 00:27:54.419 "sequence_count": 2048, 00:27:54.419 "buf_count": 2048 00:27:54.419 } 00:27:54.419 } 00:27:54.419 ] 00:27:54.419 
}, 00:27:54.419 { 00:27:54.419 "subsystem": "bdev", 00:27:54.419 "config": [ 00:27:54.419 { 00:27:54.419 "method": "bdev_set_options", 00:27:54.419 "params": { 00:27:54.419 "bdev_io_pool_size": 65535, 00:27:54.419 "bdev_io_cache_size": 256, 00:27:54.419 "bdev_auto_examine": true, 00:27:54.419 "iobuf_small_cache_size": 128, 00:27:54.419 "iobuf_large_cache_size": 16 00:27:54.419 } 00:27:54.419 }, 00:27:54.419 { 00:27:54.419 "method": "bdev_raid_set_options", 00:27:54.419 "params": { 00:27:54.419 "process_window_size_kb": 1024 00:27:54.419 } 00:27:54.419 }, 00:27:54.419 { 00:27:54.419 "method": "bdev_iscsi_set_options", 00:27:54.419 "params": { 00:27:54.419 "timeout_sec": 30 00:27:54.419 } 00:27:54.419 }, 00:27:54.419 { 00:27:54.419 "method": "bdev_nvme_set_options", 00:27:54.419 "params": { 00:27:54.419 "action_on_timeout": "none", 00:27:54.419 "timeout_us": 0, 00:27:54.419 "timeout_admin_us": 0, 00:27:54.419 "keep_alive_timeout_ms": 10000, 00:27:54.419 "arbitration_burst": 0, 00:27:54.419 "low_priority_weight": 0, 00:27:54.419 "medium_priority_weight": 0, 00:27:54.419 "high_priority_weight": 0, 00:27:54.419 "nvme_adminq_poll_period_us": 10000, 00:27:54.419 "nvme_ioq_poll_period_us": 0, 00:27:54.419 "io_queue_requests": 512, 00:27:54.419 "delay_cmd_submit": true, 00:27:54.419 "transport_retry_count": 4, 00:27:54.419 "bdev_retry_count": 3, 00:27:54.419 "transport_ack_timeout": 0, 00:27:54.419 "ctrlr_loss_timeout_sec": 0, 00:27:54.419 "reconnect_delay_sec": 0, 00:27:54.419 "fast_io_fail_timeout_sec": 0, 00:27:54.419 "disable_auto_failback": false, 00:27:54.419 "generate_uuids": false, 00:27:54.419 "transport_tos": 0, 00:27:54.419 "nvme_error_stat": false, 00:27:54.419 "rdma_srq_size": 0, 00:27:54.419 "io_path_stat": false, 00:27:54.419 "allow_accel_sequence": false, 00:27:54.419 "rdma_max_cq_size": 0, 00:27:54.419 "rdma_cm_event_timeout_ms": 0, 00:27:54.419 "dhchap_digests": [ 00:27:54.419 "sha256", 00:27:54.419 "sha384", 00:27:54.419 "sha512" 00:27:54.419 ], 00:27:54.419 "dhchap_dhgroups": [ 00:27:54.419 "null", 00:27:54.419 "ffdhe2048", 00:27:54.419 "ffdhe3072", 00:27:54.419 "ffdhe4096", 00:27:54.419 "ffdhe6144", 00:27:54.419 "ffdhe8192" 00:27:54.419 ] 00:27:54.419 } 00:27:54.419 }, 00:27:54.419 { 00:27:54.419 "method": "bdev_nvme_attach_controller", 00:27:54.419 "params": { 00:27:54.419 "name": "nvme0", 00:27:54.419 "trtype": "TCP", 00:27:54.419 "adrfam": "IPv4", 00:27:54.419 "traddr": "127.0.0.1", 00:27:54.419 "trsvcid": "4420", 00:27:54.419 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:54.419 "prchk_reftag": false, 00:27:54.419 "prchk_guard": false, 00:27:54.419 "ctrlr_loss_timeout_sec": 0, 00:27:54.419 "reconnect_delay_sec": 0, 00:27:54.419 "fast_io_fail_timeout_sec": 0, 00:27:54.419 "psk": "key0", 00:27:54.419 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:54.419 "hdgst": false, 00:27:54.419 "ddgst": false 00:27:54.419 } 00:27:54.419 }, 00:27:54.419 { 00:27:54.420 "method": "bdev_nvme_set_hotplug", 00:27:54.420 "params": { 00:27:54.420 "period_us": 100000, 00:27:54.420 "enable": false 00:27:54.420 } 00:27:54.420 }, 00:27:54.420 { 00:27:54.420 "method": "bdev_wait_for_examine" 00:27:54.420 } 00:27:54.420 ] 00:27:54.420 }, 00:27:54.420 { 00:27:54.420 "subsystem": "nbd", 00:27:54.420 "config": [] 00:27:54.420 } 00:27:54.420 ] 00:27:54.420 }' 00:27:54.420 23:54:29 keyring_file -- keyring/file.sh@114 -- # killprocess 3915503 00:27:54.420 23:54:29 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3915503 ']' 00:27:54.420 23:54:29 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 3915503 00:27:54.420 23:54:29 keyring_file -- common/autotest_common.sh@953 -- # uname 00:27:54.420 23:54:29 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:54.420 23:54:29 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3915503 00:27:54.420 23:54:29 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:54.420 23:54:29 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:54.420 23:54:29 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3915503' 00:27:54.420 killing process with pid 3915503 00:27:54.420 23:54:29 keyring_file -- common/autotest_common.sh@967 -- # kill 3915503 00:27:54.420 Received shutdown signal, test time was about 1.000000 seconds 00:27:54.420 00:27:54.420 Latency(us) 00:27:54.420 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:54.420 =================================================================================================================== 00:27:54.420 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:54.420 23:54:29 keyring_file -- common/autotest_common.sh@972 -- # wait 3915503 00:27:54.678 23:54:29 keyring_file -- keyring/file.sh@117 -- # bperfpid=3916838 00:27:54.678 23:54:29 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3916838 /var/tmp/bperf.sock 00:27:54.678 23:54:29 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3916838 ']' 00:27:54.678 23:54:29 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:54.678 23:54:29 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:27:54.678 23:54:29 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:54.678 23:54:29 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:54.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:27:54.678 23:54:29 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:27:54.678 "subsystems": [ 00:27:54.678 { 00:27:54.678 "subsystem": "keyring", 00:27:54.678 "config": [ 00:27:54.678 { 00:27:54.678 "method": "keyring_file_add_key", 00:27:54.678 "params": { 00:27:54.678 "name": "key0", 00:27:54.678 "path": "/tmp/tmp.UJZLqu9sBG" 00:27:54.678 } 00:27:54.678 }, 00:27:54.678 { 00:27:54.678 "method": "keyring_file_add_key", 00:27:54.678 "params": { 00:27:54.678 "name": "key1", 00:27:54.678 "path": "/tmp/tmp.rGIKYa9YPG" 00:27:54.678 } 00:27:54.678 } 00:27:54.678 ] 00:27:54.678 }, 00:27:54.678 { 00:27:54.678 "subsystem": "iobuf", 00:27:54.678 "config": [ 00:27:54.678 { 00:27:54.678 "method": "iobuf_set_options", 00:27:54.678 "params": { 00:27:54.678 "small_pool_count": 8192, 00:27:54.678 "large_pool_count": 1024, 00:27:54.678 "small_bufsize": 8192, 00:27:54.678 "large_bufsize": 135168 00:27:54.678 } 00:27:54.678 } 00:27:54.678 ] 00:27:54.678 }, 00:27:54.678 { 00:27:54.678 "subsystem": "sock", 00:27:54.678 "config": [ 00:27:54.678 { 00:27:54.678 "method": "sock_set_default_impl", 00:27:54.678 "params": { 00:27:54.678 "impl_name": "posix" 00:27:54.678 } 00:27:54.678 }, 00:27:54.678 { 00:27:54.678 "method": "sock_impl_set_options", 00:27:54.678 "params": { 00:27:54.678 "impl_name": "ssl", 00:27:54.678 "recv_buf_size": 4096, 00:27:54.678 "send_buf_size": 4096, 00:27:54.678 "enable_recv_pipe": true, 00:27:54.678 "enable_quickack": false, 00:27:54.678 "enable_placement_id": 0, 00:27:54.678 "enable_zerocopy_send_server": true, 00:27:54.678 "enable_zerocopy_send_client": false, 00:27:54.678 "zerocopy_threshold": 0, 00:27:54.678 "tls_version": 0, 00:27:54.678 "enable_ktls": false 00:27:54.678 } 00:27:54.678 }, 00:27:54.678 { 00:27:54.678 "method": "sock_impl_set_options", 00:27:54.678 "params": { 00:27:54.678 "impl_name": "posix", 00:27:54.678 "recv_buf_size": 2097152, 00:27:54.678 "send_buf_size": 2097152, 00:27:54.678 "enable_recv_pipe": true, 00:27:54.678 "enable_quickack": false, 00:27:54.678 "enable_placement_id": 0, 00:27:54.678 "enable_zerocopy_send_server": true, 00:27:54.678 "enable_zerocopy_send_client": false, 00:27:54.678 "zerocopy_threshold": 0, 00:27:54.678 "tls_version": 0, 00:27:54.678 "enable_ktls": false 00:27:54.678 } 00:27:54.678 } 00:27:54.678 ] 00:27:54.678 }, 00:27:54.678 { 00:27:54.678 "subsystem": "vmd", 00:27:54.678 "config": [] 00:27:54.678 }, 00:27:54.678 { 00:27:54.678 "subsystem": "accel", 00:27:54.678 "config": [ 00:27:54.678 { 00:27:54.678 "method": "accel_set_options", 00:27:54.678 "params": { 00:27:54.678 "small_cache_size": 128, 00:27:54.678 "large_cache_size": 16, 00:27:54.678 "task_count": 2048, 00:27:54.678 "sequence_count": 2048, 00:27:54.678 "buf_count": 2048 00:27:54.678 } 00:27:54.678 } 00:27:54.678 ] 00:27:54.678 }, 00:27:54.678 { 00:27:54.678 "subsystem": "bdev", 00:27:54.678 "config": [ 00:27:54.678 { 00:27:54.678 "method": "bdev_set_options", 00:27:54.678 "params": { 00:27:54.679 "bdev_io_pool_size": 65535, 00:27:54.679 "bdev_io_cache_size": 256, 00:27:54.679 "bdev_auto_examine": true, 00:27:54.679 "iobuf_small_cache_size": 128, 00:27:54.679 "iobuf_large_cache_size": 16 00:27:54.679 } 00:27:54.679 }, 00:27:54.679 { 00:27:54.679 "method": "bdev_raid_set_options", 00:27:54.679 "params": { 00:27:54.679 "process_window_size_kb": 1024 00:27:54.679 } 00:27:54.679 }, 00:27:54.679 { 00:27:54.679 "method": "bdev_iscsi_set_options", 00:27:54.679 "params": { 00:27:54.679 "timeout_sec": 30 00:27:54.679 } 00:27:54.679 }, 00:27:54.679 { 00:27:54.679 "method": 
"bdev_nvme_set_options", 00:27:54.679 "params": { 00:27:54.679 "action_on_timeout": "none", 00:27:54.679 "timeout_us": 0, 00:27:54.679 "timeout_admin_us": 0, 00:27:54.679 "keep_alive_timeout_ms": 10000, 00:27:54.679 "arbitration_burst": 0, 00:27:54.679 "low_priority_weight": 0, 00:27:54.679 "medium_priority_weight": 0, 00:27:54.679 "high_priority_weight": 0, 00:27:54.679 "nvme_adminq_poll_period_us": 10000, 00:27:54.679 "nvme_ioq_poll_period_us": 0, 00:27:54.679 "io_queue_requests": 512, 00:27:54.679 "delay_cmd_submit": true, 00:27:54.679 "transport_retry_count": 4, 00:27:54.679 "bdev_retry_count": 3, 00:27:54.679 "transport_ack_timeout": 0, 00:27:54.679 "ctrlr_loss_timeout_sec": 0, 00:27:54.679 "reconnect_delay_sec": 0, 00:27:54.679 "fast_io_fail_timeout_sec": 0, 00:27:54.679 "disable_auto_failback": false, 00:27:54.679 "generate_uuids": false, 00:27:54.679 "transport_tos": 0, 00:27:54.679 "nvme_error_stat": false, 00:27:54.679 "rdma_srq_size": 0, 00:27:54.679 "io_path_stat": false, 00:27:54.679 "allow_accel_sequence": false, 00:27:54.679 "rdma_max_cq_size": 0, 00:27:54.679 "rdma_cm_event_timeout_ms": 0, 00:27:54.679 "dhchap_digests": [ 00:27:54.679 "sha256", 00:27:54.679 "sha384", 00:27:54.679 "sha512" 00:27:54.679 ], 00:27:54.679 "dhchap_dhgroups": [ 00:27:54.679 "null", 00:27:54.679 "ffdhe2048", 00:27:54.679 "ffdhe3072", 00:27:54.679 "ffdhe4096", 00:27:54.679 "ffdhe6144", 00:27:54.679 "ffdhe8192" 00:27:54.679 ] 00:27:54.679 } 00:27:54.679 }, 00:27:54.679 { 00:27:54.679 "method": "bdev_nvme_attach_controller", 00:27:54.679 "params": { 00:27:54.679 "name": "nvme0", 00:27:54.679 "trtype": "TCP", 00:27:54.679 "adrfam": "IPv4", 00:27:54.679 "traddr": "127.0.0.1", 00:27:54.679 "trsvcid": "4420", 00:27:54.679 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:54.679 "prchk_reftag": false, 00:27:54.679 "prchk_guard": false, 00:27:54.679 "ctrlr_loss_timeout_sec": 0, 00:27:54.679 "reconnect_delay_sec": 0, 00:27:54.679 "fast_io_fail_timeout_sec": 0, 00:27:54.679 "psk": "key0", 00:27:54.679 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:54.679 "hdgst": false, 00:27:54.679 "ddgst": false 00:27:54.679 } 00:27:54.679 }, 00:27:54.679 { 00:27:54.679 "method": "bdev_nvme_set_hotplug", 00:27:54.679 "params": { 00:27:54.679 "period_us": 100000, 00:27:54.679 "enable": false 00:27:54.679 } 00:27:54.679 }, 00:27:54.679 { 00:27:54.679 "method": "bdev_wait_for_examine" 00:27:54.679 } 00:27:54.679 ] 00:27:54.679 }, 00:27:54.679 { 00:27:54.679 "subsystem": "nbd", 00:27:54.679 "config": [] 00:27:54.679 } 00:27:54.679 ] 00:27:54.679 }' 00:27:54.679 23:54:29 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:54.679 23:54:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:54.937 [2024-07-15 23:54:29.815054] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
00:27:54.937 [2024-07-15 23:54:29.815145] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3916838 ] 00:27:54.937 EAL: No free 2048 kB hugepages reported on node 1 00:27:54.937 [2024-07-15 23:54:29.872335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:54.937 [2024-07-15 23:54:29.983380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:55.195 [2024-07-15 23:54:30.165786] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:55.761 23:54:30 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:55.761 23:54:30 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:55.761 23:54:30 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:27:55.761 23:54:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:55.761 23:54:30 keyring_file -- keyring/file.sh@120 -- # jq length 00:27:56.019 23:54:30 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:27:56.019 23:54:30 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:27:56.019 23:54:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:56.019 23:54:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:56.019 23:54:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:56.019 23:54:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:56.019 23:54:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:56.276 23:54:31 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:27:56.276 23:54:31 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:27:56.276 23:54:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:56.276 23:54:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:56.276 23:54:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:56.276 23:54:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:56.276 23:54:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:56.534 23:54:31 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:27:56.534 23:54:31 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:27:56.534 23:54:31 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:27:56.534 23:54:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:27:56.792 23:54:31 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:27:56.792 23:54:31 keyring_file -- keyring/file.sh@1 -- # cleanup 00:27:56.792 23:54:31 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.UJZLqu9sBG /tmp/tmp.rGIKYa9YPG 00:27:56.792 23:54:31 keyring_file -- keyring/file.sh@20 -- # killprocess 3916838 00:27:56.792 23:54:31 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3916838 ']' 00:27:56.792 23:54:31 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3916838 00:27:56.792 23:54:31 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:27:56.792 23:54:31 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:56.792 23:54:31 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3916838 00:27:56.792 23:54:31 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:56.792 23:54:31 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:56.792 23:54:31 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3916838' 00:27:56.792 killing process with pid 3916838 00:27:56.792 23:54:31 keyring_file -- common/autotest_common.sh@967 -- # kill 3916838 00:27:56.792 Received shutdown signal, test time was about 1.000000 seconds 00:27:56.792 00:27:56.792 Latency(us) 00:27:56.792 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:56.792 =================================================================================================================== 00:27:56.792 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:56.792 23:54:31 keyring_file -- common/autotest_common.sh@972 -- # wait 3916838 00:27:57.050 23:54:31 keyring_file -- keyring/file.sh@21 -- # killprocess 3915367 00:27:57.050 23:54:31 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3915367 ']' 00:27:57.050 23:54:31 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3915367 00:27:57.050 23:54:31 keyring_file -- common/autotest_common.sh@953 -- # uname 00:27:57.050 23:54:31 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:57.050 23:54:31 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3915367 00:27:57.050 23:54:32 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:57.050 23:54:32 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:57.050 23:54:32 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3915367' 00:27:57.050 killing process with pid 3915367 00:27:57.050 23:54:32 keyring_file -- common/autotest_common.sh@967 -- # kill 3915367 00:27:57.050 [2024-07-15 23:54:32.021302] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:57.050 23:54:32 keyring_file -- common/autotest_common.sh@972 -- # wait 3915367 00:27:57.308 00:27:57.308 real 0m14.546s 00:27:57.308 user 0m35.677s 00:27:57.308 sys 0m3.290s 00:27:57.308 23:54:32 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:57.308 23:54:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:57.308 ************************************ 00:27:57.309 END TEST keyring_file 00:27:57.309 ************************************ 00:27:57.568 23:54:32 -- common/autotest_common.sh@1142 -- # return 0 00:27:57.568 23:54:32 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:27:57.568 23:54:32 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:27:57.568 23:54:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:57.568 23:54:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:57.568 23:54:32 -- common/autotest_common.sh@10 -- # set +x 00:27:57.568 ************************************ 00:27:57.568 START TEST keyring_linux 00:27:57.568 ************************************ 00:27:57.568 23:54:32 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:27:57.568 * Looking for test storage... 00:27:57.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:27:57.568 23:54:32 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:27:57.568 23:54:32 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:57.568 23:54:32 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:27:57.568 23:54:32 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:57.568 23:54:32 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:57.568 23:54:32 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:57.568 23:54:32 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:57.568 23:54:32 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:57.568 23:54:32 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:57.568 23:54:32 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:57.568 23:54:32 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:57.568 23:54:32 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:57.568 23:54:32 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:57.568 23:54:32 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:57.568 23:54:32 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:57.568 23:54:32 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:57.568 23:54:32 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:57.568 23:54:32 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:57.568 23:54:32 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:57.568 23:54:32 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:57.568 23:54:32 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:57.568 23:54:32 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:57.568 23:54:32 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:57.568 23:54:32 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.568 23:54:32 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.568 23:54:32 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.568 23:54:32 keyring_linux -- paths/export.sh@5 -- # export PATH 00:27:57.568 23:54:32 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.568 23:54:32 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:27:57.568 23:54:32 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:57.568 23:54:32 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:57.568 23:54:32 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:57.568 23:54:32 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:57.568 23:54:32 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:57.568 23:54:32 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:57.568 23:54:32 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:57.568 23:54:32 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:57.568 23:54:32 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:57.568 23:54:32 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:57.568 23:54:32 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:57.568 23:54:32 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:27:57.568 23:54:32 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:27:57.568 23:54:32 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:27:57.568 23:54:32 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:27:57.568 23:54:32 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:57.568 23:54:32 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:27:57.568 23:54:32 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:57.568 23:54:32 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:57.568 23:54:32 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:27:57.568 23:54:32 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:57.568 23:54:32 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:57.568 23:54:32 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:27:57.568 23:54:32 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:57.568 23:54:32 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:57.568 23:54:32 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:27:57.568 23:54:32 keyring_linux -- nvmf/common.sh@705 -- # python - 00:27:57.568 23:54:32 keyring_linux -- 
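Both prep_key calls (here and in the keyring_file run earlier) shell out to an inline python - snippet whose body is elided from the trace. Its output is visible later as the keyctl payloads, and those strings are consistent with the NVMe TLS PSK interchange format: base64 over the raw key bytes plus a little-endian CRC32. A sketch under that assumption; the authoritative helper is format_interchange_psk/format_key in nvmf/common.sh:

# sketch: plausible body of the "python -" call traced above
key=00112233445566778899aabbccddeeff    # same test key used here
python3 - "$key" <<'PY'
import base64, struct, sys, zlib
key = sys.argv[1].encode()
payload = key + struct.pack("<I", zlib.crc32(key))  # key bytes + CRC32 trailer
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(payload).decode())
PY
# should print NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
# i.e. the payload registered with keyctl a few lines below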
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:27:57.568 23:54:32 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:27:57.568 /tmp/:spdk-test:key0 00:27:57.568 23:54:32 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:27:57.568 23:54:32 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:57.568 23:54:32 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:27:57.568 23:54:32 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:57.568 23:54:32 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:57.568 23:54:32 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:27:57.568 23:54:32 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:57.568 23:54:32 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:27:57.568 23:54:32 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:27:57.568 23:54:32 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:57.568 23:54:32 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:57.568 23:54:32 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:27:57.568 23:54:32 keyring_linux -- nvmf/common.sh@705 -- # python - 00:27:57.568 23:54:32 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:27:57.568 23:54:32 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:27:57.568 /tmp/:spdk-test:key1 00:27:57.568 23:54:32 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3917319 00:27:57.568 23:54:32 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:27:57.568 23:54:32 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3917319 00:27:57.568 23:54:32 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3917319 ']' 00:27:57.568 23:54:32 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:57.568 23:54:32 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:57.568 23:54:32 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:57.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:57.568 23:54:32 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:57.568 23:54:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:57.568 [2024-07-15 23:54:32.656712] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
00:27:57.568 [2024-07-15 23:54:32.656813] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3917319 ] 00:27:57.568 EAL: No free 2048 kB hugepages reported on node 1 00:27:57.827 [2024-07-15 23:54:32.714434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:57.827 [2024-07-15 23:54:32.815329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:58.085 23:54:33 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:58.085 23:54:33 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:27:58.085 23:54:33 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:27:58.085 23:54:33 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.085 23:54:33 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:58.085 [2024-07-15 23:54:33.059741] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:58.085 null0 00:27:58.085 [2024-07-15 23:54:33.091794] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:58.085 [2024-07-15 23:54:33.092275] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:58.085 23:54:33 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.085 23:54:33 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:27:58.085 92596351 00:27:58.085 23:54:33 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:27:58.085 1053001723 00:27:58.085 23:54:33 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3917334 00:27:58.085 23:54:33 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:27:58.085 23:54:33 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3917334 /var/tmp/bperf.sock 00:27:58.085 23:54:33 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3917334 ']' 00:27:58.085 23:54:33 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:58.085 23:54:33 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:58.085 23:54:33 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:58.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:58.085 23:54:33 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:58.085 23:54:33 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:58.085 [2024-07-15 23:54:33.154680] Starting SPDK v24.09-pre git sha1 1053f1b13 / DPDK 24.03.0 initialization... 
00:27:58.086 [2024-07-15 23:54:33.154744] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3917334 ] 00:27:58.086 EAL: No free 2048 kB hugepages reported on node 1 00:27:58.345 [2024-07-15 23:54:33.210671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:58.345 [2024-07-15 23:54:33.315537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:58.345 23:54:33 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:58.345 23:54:33 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:27:58.345 23:54:33 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:27:58.345 23:54:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:27:58.602 23:54:33 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:27:58.602 23:54:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:58.858 23:54:33 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:58.858 23:54:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:59.115 [2024-07-15 23:54:34.164034] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:59.115 nvme0n1 00:27:59.373 23:54:34 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:27:59.373 23:54:34 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:27:59.373 23:54:34 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:27:59.373 23:54:34 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:27:59.373 23:54:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:59.373 23:54:34 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:27:59.373 23:54:34 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:27:59.373 23:54:34 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:27:59.373 23:54:34 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:27:59.373 23:54:34 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:27:59.373 23:54:34 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:59.373 23:54:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:59.373 23:54:34 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:27:59.629 23:54:34 keyring_linux -- keyring/linux.sh@25 -- # sn=92596351 00:27:59.885 23:54:34 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:27:59.885 23:54:34 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:27:59.885 23:54:34 keyring_linux -- keyring/linux.sh@26 -- # [[ 92596351 == \9\2\5\9\6\3\5\1 ]] 00:27:59.885 23:54:34 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 92596351 00:27:59.885 23:54:34 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:27:59.885 23:54:34 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:59.885 Running I/O for 1 seconds... 00:28:00.815 00:28:00.815 Latency(us) 00:28:00.815 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:00.815 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:00.815 nvme0n1 : 1.01 9084.57 35.49 0.00 0.00 13989.95 8932.31 22913.33 00:28:00.815 =================================================================================================================== 00:28:00.815 Total : 9084.57 35.49 0.00 0.00 13989.95 8932.31 22913.33 00:28:00.815 0 00:28:00.815 23:54:35 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:00.815 23:54:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:01.071 23:54:36 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:28:01.071 23:54:36 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:28:01.071 23:54:36 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:28:01.071 23:54:36 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:28:01.071 23:54:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:01.071 23:54:36 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:28:01.329 23:54:36 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:28:01.329 23:54:36 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:28:01.329 23:54:36 keyring_linux -- keyring/linux.sh@23 -- # return 00:28:01.329 23:54:36 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:01.329 23:54:36 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:28:01.329 23:54:36 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:01.329 23:54:36 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:28:01.329 23:54:36 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:01.329 23:54:36 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:28:01.329 23:54:36 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:01.329 23:54:36 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:01.329 23:54:36 keyring_linux -- keyring/common.sh@8 
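The keyring_linux path stores interchange-format PSKs directly in the kernel session keyring and cross-checks them with keyutils: the serial printed by keyctl add (92596351) must match both keyctl search and the .sn that keyring_get_keys reports over RPC. Condensed, the round trip driven by linux.sh is:

# sketch: the kernel-keyring round trip exercised above (keyutils required)
psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
sn=$(keyctl add user :spdk-test:key0 "$psk" @s)  # add to session keyring, prints serial
keyctl search @s user :spdk-test:key0            # resolves the name to the same serial
keyctl print "$sn"                               # dumps the payload for comparison
keyctl unlink "$sn"                              # drops the link ("1 links removed")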
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:01.586 [2024-07-15 23:54:36.617035] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:28:01.586 [2024-07-15 23:54:36.617657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16be3f0 (107): Transport endpoint is not connected 00:28:01.586 [2024-07-15 23:54:36.618651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16be3f0 (9): Bad file descriptor 00:28:01.586 [2024-07-15 23:54:36.619650] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:01.586 [2024-07-15 23:54:36.619670] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:28:01.586 [2024-07-15 23:54:36.619682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:01.586 request: 00:28:01.586 { 00:28:01.586 "name": "nvme0", 00:28:01.586 "trtype": "tcp", 00:28:01.586 "traddr": "127.0.0.1", 00:28:01.586 "adrfam": "ipv4", 00:28:01.586 "trsvcid": "4420", 00:28:01.586 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:01.586 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:01.586 "prchk_reftag": false, 00:28:01.586 "prchk_guard": false, 00:28:01.586 "hdgst": false, 00:28:01.586 "ddgst": false, 00:28:01.586 "psk": ":spdk-test:key1", 00:28:01.586 "method": "bdev_nvme_attach_controller", 00:28:01.586 "req_id": 1 00:28:01.586 } 00:28:01.586 Got JSON-RPC error response 00:28:01.586 response: 00:28:01.586 { 00:28:01.586 "code": -5, 00:28:01.586 "message": "Input/output error" 00:28:01.586 } 00:28:01.586 23:54:36 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:28:01.586 23:54:36 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:01.586 23:54:36 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:01.586 23:54:36 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:01.586 23:54:36 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:28:01.586 23:54:36 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:28:01.586 23:54:36 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:28:01.586 23:54:36 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:28:01.586 23:54:36 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:28:01.586 23:54:36 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:28:01.586 23:54:36 keyring_linux -- keyring/linux.sh@33 -- # sn=92596351 00:28:01.586 23:54:36 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 92596351 00:28:01.586 1 links removed 00:28:01.586 23:54:36 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:28:01.586 23:54:36 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:28:01.586 23:54:36 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:28:01.586 23:54:36 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:28:01.586 23:54:36 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:28:01.586 23:54:36 keyring_linux -- keyring/linux.sh@33 -- # sn=1053001723 00:28:01.586 23:54:36 keyring_linux -- 
keyring/linux.sh@34 -- # keyctl unlink 1053001723 00:28:01.586 1 links removed 00:28:01.586 23:54:36 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3917334 00:28:01.586 23:54:36 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3917334 ']' 00:28:01.586 23:54:36 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3917334 00:28:01.586 23:54:36 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:28:01.586 23:54:36 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:01.586 23:54:36 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3917334 00:28:01.586 23:54:36 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:01.586 23:54:36 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:01.586 23:54:36 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3917334' 00:28:01.586 killing process with pid 3917334 00:28:01.586 23:54:36 keyring_linux -- common/autotest_common.sh@967 -- # kill 3917334 00:28:01.586 Received shutdown signal, test time was about 1.000000 seconds 00:28:01.586 00:28:01.586 Latency(us) 00:28:01.586 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:01.586 =================================================================================================================== 00:28:01.586 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:01.586 23:54:36 keyring_linux -- common/autotest_common.sh@972 -- # wait 3917334 00:28:01.844 23:54:36 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3917319 00:28:01.844 23:54:36 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3917319 ']' 00:28:01.844 23:54:36 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3917319 00:28:01.844 23:54:36 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:28:01.844 23:54:36 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:01.844 23:54:36 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3917319 00:28:01.844 23:54:36 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:01.844 23:54:36 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:01.844 23:54:36 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3917319' 00:28:01.844 killing process with pid 3917319 00:28:01.844 23:54:36 keyring_linux -- common/autotest_common.sh@967 -- # kill 3917319 00:28:01.844 23:54:36 keyring_linux -- common/autotest_common.sh@972 -- # wait 3917319 00:28:02.408 00:28:02.408 real 0m4.920s 00:28:02.408 user 0m9.364s 00:28:02.408 sys 0m1.617s 00:28:02.408 23:54:37 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:02.408 23:54:37 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:02.408 ************************************ 00:28:02.408 END TEST keyring_linux 00:28:02.408 ************************************ 00:28:02.408 23:54:37 -- common/autotest_common.sh@1142 -- # return 0 00:28:02.408 23:54:37 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:28:02.408 23:54:37 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:28:02.408 23:54:37 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:28:02.408 23:54:37 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:28:02.408 23:54:37 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:28:02.408 23:54:37 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:28:02.408 23:54:37 -- spdk/autotest.sh@339 -- # '[' 0 -eq 
1 ']' 00:28:02.408 23:54:37 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:28:02.408 23:54:37 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:28:02.408 23:54:37 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:28:02.408 23:54:37 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:28:02.408 23:54:37 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:28:02.408 23:54:37 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:28:02.408 23:54:37 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:28:02.408 23:54:37 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:28:02.408 23:54:37 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:28:02.408 23:54:37 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:28:02.408 23:54:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:02.408 23:54:37 -- common/autotest_common.sh@10 -- # set +x 00:28:02.408 23:54:37 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:28:02.408 23:54:37 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:28:02.408 23:54:37 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:28:02.408 23:54:37 -- common/autotest_common.sh@10 -- # set +x 00:28:04.307 INFO: APP EXITING 00:28:04.307 INFO: killing all VMs 00:28:04.307 INFO: killing vhost app 00:28:04.307 INFO: EXIT DONE 00:28:05.679 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:28:05.679 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:28:05.679 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:28:05.679 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:28:05.679 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:28:05.679 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:28:05.679 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:28:05.679 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:28:05.679 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:28:05.679 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:28:05.679 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:28:05.679 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:28:05.679 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:28:05.679 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:28:05.679 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:28:05.679 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:28:05.679 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:28:07.141 Cleaning 00:28:07.141 Removing: /var/run/dpdk/spdk0/config 00:28:07.141 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:07.141 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:07.141 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:07.141 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:07.141 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:28:07.141 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:28:07.141 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:28:07.141 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:28:07.141 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:07.141 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:07.141 Removing: /var/run/dpdk/spdk1/config 00:28:07.141 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:28:07.141 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:28:07.141 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:28:07.141 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:28:07.141 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:28:07.141 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:28:07.141 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:28:07.141 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:28:07.141 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:28:07.141 Removing: /var/run/dpdk/spdk1/hugepage_info 00:28:07.141 Removing: /var/run/dpdk/spdk1/mp_socket 00:28:07.141 Removing: /var/run/dpdk/spdk2/config 00:28:07.141 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:28:07.141 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:28:07.141 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:28:07.141 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:28:07.141 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:28:07.141 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:28:07.141 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:28:07.141 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:28:07.141 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:28:07.141 Removing: /var/run/dpdk/spdk2/hugepage_info 00:28:07.141 Removing: /var/run/dpdk/spdk3/config 00:28:07.141 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:28:07.141 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:28:07.141 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:28:07.141 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:28:07.141 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:28:07.141 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:28:07.141 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:28:07.141 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:28:07.141 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:28:07.141 Removing: /var/run/dpdk/spdk3/hugepage_info 00:28:07.141 Removing: /var/run/dpdk/spdk4/config 00:28:07.141 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:28:07.141 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:28:07.141 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:28:07.141 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:28:07.141 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:28:07.142 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:28:07.142 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:28:07.142 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:28:07.142 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:28:07.142 Removing: /var/run/dpdk/spdk4/hugepage_info 00:28:07.142 Removing: /dev/shm/bdev_svc_trace.1 00:28:07.142 Removing: /dev/shm/nvmf_trace.0 00:28:07.142 Removing: /dev/shm/spdk_tgt_trace.pid3659858 00:28:07.142 Removing: /var/run/dpdk/spdk0 00:28:07.142 Removing: /var/run/dpdk/spdk1 00:28:07.142 Removing: /var/run/dpdk/spdk2 00:28:07.142 Removing: /var/run/dpdk/spdk3 00:28:07.142 Removing: /var/run/dpdk/spdk4 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3658317 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3659048 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3659858 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3660295 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3660982 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3661122 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3661840 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3661850 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3662094 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3663289 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3664328 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3664512 
00:28:07.142 Removing: /var/run/dpdk/spdk_pid3664704 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3664980 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3665195 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3665374 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3665536 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3665714 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3666025 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3668377 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3668539 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3668701 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3668714 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3669135 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3669144 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3669575 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3669578 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3669872 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3669878 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3670042 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3670102 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3670541 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3670702 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3670893 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3671063 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3671215 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3671274 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3671548 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3671709 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3671866 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3672034 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3672301 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3672454 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3672613 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3672889 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3673042 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3673201 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3673469 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3673637 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3673788 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3674028 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3674222 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3674382 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3674543 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3674819 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3674975 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3675135 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3675320 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3675526 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3677702 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3704085 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3706715 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3713560 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3716851 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3719194 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3720036 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3724140 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3728053 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3728059 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3728709 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3729249 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3729912 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3730314 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3730320 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3730573 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3730710 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3730712 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3731303 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3731911 
00:28:07.142 Removing: /var/run/dpdk/spdk_pid3732571 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3732966 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3732974 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3733224 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3734124 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3734845 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3740203 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3740365 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3742983 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3746688 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3748854 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3755740 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3760958 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3762152 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3762823 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3773016 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3775137 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3799647 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3802433 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3803609 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3804928 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3805063 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3805154 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3805225 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3805659 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3806988 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3807797 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3808136 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3810257 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3810681 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3811128 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3813646 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3819544 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3822306 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3826074 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3827024 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3828114 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3830655 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3833014 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3837222 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3837233 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3839999 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3840144 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3840355 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3840655 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3840668 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3843429 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3843758 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3846405 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3848895 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3852303 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3855504 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3861845 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3866209 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3866211 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3878144 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3878552 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3878956 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3879461 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3880121 00:28:07.142 Removing: /var/run/dpdk/spdk_pid3880587 00:28:07.400 Removing: /var/run/dpdk/spdk_pid3881503 00:28:07.400 Removing: /var/run/dpdk/spdk_pid3881913 00:28:07.400 Removing: /var/run/dpdk/spdk_pid3884407 00:28:07.400 Removing: /var/run/dpdk/spdk_pid3884557 00:28:07.400 Removing: /var/run/dpdk/spdk_pid3888448 00:28:07.400 Removing: /var/run/dpdk/spdk_pid3888519 
00:28:07.400 Removing: /var/run/dpdk/spdk_pid3890126 00:28:07.400 Removing: /var/run/dpdk/spdk_pid3895108 00:28:07.400 Removing: /var/run/dpdk/spdk_pid3895164 00:28:07.400 Removing: /var/run/dpdk/spdk_pid3897960 00:28:07.400 Removing: /var/run/dpdk/spdk_pid3899344 00:28:07.400 Removing: /var/run/dpdk/spdk_pid3900744 00:28:07.400 Removing: /var/run/dpdk/spdk_pid3901601 00:28:07.400 Removing: /var/run/dpdk/spdk_pid3903099 00:28:07.400 Removing: /var/run/dpdk/spdk_pid3903891 00:28:07.400 Removing: /var/run/dpdk/spdk_pid3909314 00:28:07.400 Removing: /var/run/dpdk/spdk_pid3909676 00:28:07.400 Removing: /var/run/dpdk/spdk_pid3910070 00:28:07.400 Removing: /var/run/dpdk/spdk_pid3911693 00:28:07.400 Removing: /var/run/dpdk/spdk_pid3912130 00:28:07.400 Removing: /var/run/dpdk/spdk_pid3912568 00:28:07.400 Removing: /var/run/dpdk/spdk_pid3915367 00:28:07.400 Removing: /var/run/dpdk/spdk_pid3915503 00:28:07.400 Removing: /var/run/dpdk/spdk_pid3916838 00:28:07.400 Removing: /var/run/dpdk/spdk_pid3917319 00:28:07.400 Removing: /var/run/dpdk/spdk_pid3917334 00:28:07.400 Clean 00:28:07.400 23:54:42 -- common/autotest_common.sh@1451 -- # return 0 00:28:07.400 23:54:42 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:28:07.400 23:54:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:07.400 23:54:42 -- common/autotest_common.sh@10 -- # set +x 00:28:07.400 23:54:42 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:28:07.400 23:54:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:07.400 23:54:42 -- common/autotest_common.sh@10 -- # set +x 00:28:07.400 23:54:42 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:28:07.400 23:54:42 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:28:07.400 23:54:42 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:28:07.400 23:54:42 -- spdk/autotest.sh@391 -- # hash lcov 00:28:07.400 23:54:42 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:28:07.400 23:54:42 -- spdk/autotest.sh@393 -- # hostname 00:28:07.400 23:54:42 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:28:07.656 geninfo: WARNING: invalid characters removed from testname! 
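[Editor's note, not part of the captured console output: the coverage post-processing recorded in the surrounding lcov entries reduces to the short shell sequence sketched below. $OUT is a hypothetical stand-in for the long .../spdk/../output Jenkins path, and the repeated --rc genhtml_* options are trimmed; this illustrates the flow, it is not the verbatim autotest.sh code.]

    # Branch and function coverage options passed on every lcov invocation.
    RC="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"

    # Merge the pre-test baseline capture with the post-test capture.
    lcov $RC --no-external -q \
        -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" \
        -o "$OUT/cov_total.info"

    # Strip everything that is not SPDK's own code from the merged tracefile.
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $RC --no-external -q -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
    done

[End editor's note.]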
00:28:39.709 23:55:10 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:39.709 23:55:14 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:42.237 23:55:17 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:45.517 23:55:20 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:48.043 23:55:23 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:51.323 23:55:26 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:53.851 23:55:28 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:28:54.111 23:55:29 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:54.111 23:55:29 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:28:54.111 23:55:29 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:54.111 23:55:29 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:54.111 23:55:29 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.111 23:55:29 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.111 23:55:29 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.111 23:55:29 -- paths/export.sh@5 -- $ export PATH 00:28:54.111 23:55:29 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.111 23:55:29 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:28:54.111 23:55:29 -- common/autobuild_common.sh@444 -- $ date +%s 00:28:54.111 23:55:29 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721080529.XXXXXX 00:28:54.111 23:55:29 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721080529.sWr8ST 00:28:54.111 23:55:29 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:28:54.111 23:55:29 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:28:54.111 23:55:29 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:28:54.111 23:55:29 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:28:54.111 23:55:29 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:28:54.111 23:55:29 -- common/autobuild_common.sh@460 -- $ get_config_params 00:28:54.111 23:55:29 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:28:54.111 23:55:29 -- common/autotest_common.sh@10 -- $ set +x 00:28:54.111 23:55:29 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:28:54.111 23:55:29 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:28:54.111 23:55:29 -- pm/common@17 -- $ local monitor 00:28:54.111 23:55:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:54.111 23:55:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:54.111 23:55:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:54.111 23:55:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:54.111 23:55:29 -- pm/common@21 -- $ date +%s 00:28:54.111 23:55:29 -- pm/common@21 -- $ date +%s 00:28:54.111 
23:55:29 -- pm/common@25 -- $ sleep 1 00:28:54.111 23:55:29 -- pm/common@21 -- $ date +%s 00:28:54.111 23:55:29 -- pm/common@21 -- $ date +%s 00:28:54.111 23:55:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721080529 00:28:54.111 23:55:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721080529 00:28:54.111 23:55:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721080529 00:28:54.111 23:55:29 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721080529 00:28:54.111 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721080529_collect-vmstat.pm.log 00:28:54.111 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721080529_collect-cpu-load.pm.log 00:28:54.111 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721080529_collect-cpu-temp.pm.log 00:28:54.111 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721080529_collect-bmc-pm.bmc.pm.log 00:28:55.096 23:55:30 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:28:55.096 23:55:30 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:28:55.096 23:55:30 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:55.096 23:55:30 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:28:55.096 23:55:30 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:28:55.096 23:55:30 -- spdk/autopackage.sh@19 -- $ timing_finish 00:28:55.096 23:55:30 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:28:55.096 23:55:30 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:28:55.096 23:55:30 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:28:55.096 23:55:30 -- spdk/autopackage.sh@20 -- $ exit 0 00:28:55.096 23:55:30 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:28:55.096 23:55:30 -- pm/common@29 -- $ signal_monitor_resources TERM 00:28:55.096 23:55:30 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:28:55.096 23:55:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:55.096 23:55:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:28:55.096 23:55:30 -- pm/common@44 -- $ pid=3926917 00:28:55.096 23:55:30 -- pm/common@50 -- $ kill -TERM 3926917 00:28:55.096 23:55:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:55.096 23:55:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:28:55.096 23:55:30 -- pm/common@44 -- $ pid=3926918 00:28:55.096 23:55:30 -- pm/common@50 -- $ kill 
-TERM 3926918 00:28:55.096 23:55:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:55.096 23:55:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:28:55.096 23:55:30 -- pm/common@44 -- $ pid=3926920 00:28:55.096 23:55:30 -- pm/common@50 -- $ kill -TERM 3926920 00:28:55.096 23:55:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:55.096 23:55:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:28:55.096 23:55:30 -- pm/common@44 -- $ pid=3926950 00:28:55.096 23:55:30 -- pm/common@50 -- $ sudo -E kill -TERM 3926950 00:28:55.096 + [[ -n 3574465 ]] 00:28:55.096 + sudo kill 3574465 00:28:55.105 [Pipeline] } 00:28:55.125 [Pipeline] // stage 00:28:55.131 [Pipeline] } 00:28:55.144 [Pipeline] // timeout 00:28:55.148 [Pipeline] } 00:28:55.164 [Pipeline] // catchError 00:28:55.169 [Pipeline] } 00:28:55.185 [Pipeline] // wrap 00:28:55.189 [Pipeline] } 00:28:55.201 [Pipeline] // catchError 00:28:55.210 [Pipeline] stage 00:28:55.212 [Pipeline] { (Epilogue) 00:28:55.226 [Pipeline] catchError 00:28:55.228 [Pipeline] { 00:28:55.243 [Pipeline] echo 00:28:55.244 Cleanup processes 00:28:55.250 [Pipeline] sh 00:28:55.535 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:55.535 3927071 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:28:55.535 3927185 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:55.551 [Pipeline] sh 00:28:55.837 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:55.837 ++ grep -v 'sudo pgrep' 00:28:55.837 ++ awk '{print $1}' 00:28:55.837 + sudo kill -9 3927071 00:28:55.850 [Pipeline] sh 00:28:56.134 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:29:04.250 [Pipeline] sh 00:29:04.537 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:29:04.537 Artifacts sizes are good 00:29:04.552 [Pipeline] archiveArtifacts 00:29:04.560 Archiving artifacts 00:29:04.790 [Pipeline] sh 00:29:05.075 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:29:05.091 [Pipeline] cleanWs 00:29:05.101 [WS-CLEANUP] Deleting project workspace... 00:29:05.101 [WS-CLEANUP] Deferred wipeout is used... 00:29:05.108 [WS-CLEANUP] done 00:29:05.110 [Pipeline] } 00:29:05.129 [Pipeline] // catchError 00:29:05.140 [Pipeline] sh 00:29:05.415 + logger -p user.info -t JENKINS-CI 00:29:05.422 [Pipeline] } 00:29:05.436 [Pipeline] // stage 00:29:05.441 [Pipeline] } 00:29:05.459 [Pipeline] // node 00:29:05.464 [Pipeline] End of Pipeline 00:29:05.486 Finished: SUCCESS
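[Editor's appendix, not part of the captured console output: the kernel-keyring helpers exercised by the keyring_linux test earlier in this log, reconstructed from its xtrace lines. A sketch for reference, not the verbatim keyring/linux.sh; key names follow the test's :spdk-test:keyN convention.]

    # Resolve a key's serial number in the kernel session keyring (@s).
    get_keysn() {
        keyctl search @s user "$1"
    }

    # Unlink one test key; keyctl reports e.g. "1 links removed".
    unlink_key() {
        local name=$1 sn
        sn=$(get_keysn ":spdk-test:$name")
        keyctl unlink "$sn"
    }

    # cleanup() walks both PSKs, matching the xtrace seen above.
    for key in key0 key1; do
        unlink_key "$key"
    done

[End editor's appendix.]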